00:00:00.001 Started by upstream project "autotest-per-patch" build number 127140
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 24287
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.029 The recommended git tool is: git
00:00:00.029 using credential 00000000-0000-0000-0000-000000000002
00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.055 Fetching changes from the remote Git repository
00:00:00.057 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.106 Using shallow fetch with depth 1
00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.106 > git --version # timeout=10
00:00:00.184 > git --version # 'git version 2.39.2'
00:00:00.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/05/24305/3 # timeout=5
00:00:04.192 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.205 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.219 Checking out Revision b96e2fd4fd67f35d13e68ed8cd11d67d819ff3fc (FETCH_HEAD)
00:00:04.219 > git config core.sparsecheckout # timeout=10
00:00:04.232 > git read-tree -mu HEAD # timeout=10
00:00:04.251 > git checkout -f b96e2fd4fd67f35d13e68ed8cd11d67d819ff3fc # timeout=5
00:00:04.272 Commit message: "jjb/jobs: add SPDK_TEST_SETUP flag into configuration"
00:00:04.272 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:04.410 [Pipeline] Start of Pipeline
00:00:04.423 [Pipeline] library
00:00:04.424 Loading library shm_lib@master
00:00:04.424 Library shm_lib@master is cached. Copying from home.
00:00:04.443 [Pipeline] node
00:00:04.457 Running on WFP13 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:04.459 [Pipeline] {
00:00:04.471 [Pipeline] catchError
00:00:04.473 [Pipeline] {
00:00:04.484 [Pipeline] wrap
00:00:04.494 [Pipeline] {
00:00:04.502 [Pipeline] stage
00:00:04.504 [Pipeline] { (Prologue)
00:00:04.712 [Pipeline] sh
00:00:04.999 + logger -p user.info -t JENKINS-CI
00:00:05.020 [Pipeline] echo
00:00:05.022 Node: WFP13
00:00:05.029 [Pipeline] sh
00:00:05.330 [Pipeline] setCustomBuildProperty
00:00:05.344 [Pipeline] echo
00:00:05.346 Cleanup processes
00:00:05.354 [Pipeline] sh
00:00:05.636 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.636 286985 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.651 [Pipeline] sh
00:00:05.937 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.937 ++ grep -v 'sudo pgrep'
00:00:05.937 ++ awk '{print $1}'
00:00:05.937 + sudo kill -9
00:00:05.937 + true
00:00:05.951 [Pipeline] cleanWs
00:00:05.960 [WS-CLEANUP] Deleting project workspace...
00:00:05.960 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.966 [WS-CLEANUP] done
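The "Cleanup processes" step above chains pgrep, grep and awk to reap any stale SPDK processes left in the workspace, and tolerates an empty kill list. A minimal standalone sketch of that idiom, assuming only the workspace path shown in the trace (the rest is illustrative, not the pipeline's actual script):

# Sketch of the cleanup idiom traced above.
ws=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk    # path from the log

# pgrep -af prints "PID command-line" for every match; drop the 'sudo pgrep'
# line so the search does not kill itself, then keep only the PID column.
pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')

# kill -9 with no PIDs exits non-zero; '|| true' keeps an errexit shell alive,
# which is what the '+ true' line in the trace accomplishes.
sudo kill -9 $pids || true                                # unquoted: one PID per word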
00:00:05.971 [Pipeline] setCustomBuildProperty
00:00:05.987 [Pipeline] sh
00:00:06.269 + sudo git config --global --replace-all safe.directory '*'
00:00:06.373 [Pipeline] httpRequest
00:00:06.397 [Pipeline] echo
00:00:06.399 Sorcerer 10.211.164.101 is alive
00:00:06.409 [Pipeline] httpRequest
00:00:06.414 HttpMethod: GET
00:00:06.414 URL: http://10.211.164.101/packages/jbp_b96e2fd4fd67f35d13e68ed8cd11d67d819ff3fc.tar.gz
00:00:06.415 Sending request to url: http://10.211.164.101/packages/jbp_b96e2fd4fd67f35d13e68ed8cd11d67d819ff3fc.tar.gz
00:00:06.419 Response Code: HTTP/1.1 200 OK
00:00:06.419 Success: Status code 200 is in the accepted range: 200,404
00:00:06.420 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_b96e2fd4fd67f35d13e68ed8cd11d67d819ff3fc.tar.gz
00:00:07.530 [Pipeline] sh
00:00:07.805 + tar --no-same-owner -xf jbp_b96e2fd4fd67f35d13e68ed8cd11d67d819ff3fc.tar.gz
00:00:07.823 [Pipeline] httpRequest
00:00:07.841 [Pipeline] echo
00:00:07.842 Sorcerer 10.211.164.101 is alive
00:00:07.849 [Pipeline] httpRequest
00:00:07.853 HttpMethod: GET
00:00:07.853 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:07.854 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:07.862 Response Code: HTTP/1.1 200 OK
00:00:07.863 Success: Status code 200 is in the accepted range: 200,404
00:00:07.863 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:41.219 [Pipeline] sh
00:01:41.504 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:44.055 [Pipeline] sh
00:01:44.338 + git -C spdk log --oneline -n5
00:01:44.338 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:44.338 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:44.339 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:44.339 d005e023b raid: fix empty slot not updated in sb after resize
00:01:44.339 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
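Each httpRequest/tar pair above fetches a source bundle from the Sorcerer package cache (10.211.164.101), pinned to an exact revision, instead of cloning from Gerrit. A rough shell equivalent of one fetch-and-unpack step; curl stands in for the Jenkins httpRequest step and is an assumption, while the URL shape and tar flags are taken from the log:

# Hypothetical out-of-Jenkins equivalent of the httpRequest + tar steps above.
commit=b96e2fd4fd67f35d13e68ed8cd11d67d819ff3fc           # pinned jbp revision from the log
url=http://10.211.164.101/packages/jbp_${commit}.tar.gz

curl -fsS -o "jbp_${commit}.tar.gz" "$url"                # httpRequest stand-in (assumption)
tar --no-same-owner -xf "jbp_${commit}.tar.gz"            # same flags as the trace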
00:01:44.355 [Pipeline] sh
00:01:44.639 + ip --json address
00:01:44.653 [Pipeline] readJSON
00:01:44.674 [Pipeline] echo
00:01:44.676 NIC with Beetle address is already setup (192.168.10.10)
00:01:44.681 [Pipeline] withCredentials
00:01:44.698 Masking supported pattern matches of $beetle_key
00:01:44.700 [Pipeline] {
00:01:44.708 [Pipeline] sh
00:01:44.991 + ssh -i **** -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectionAttempts=5 root@192.168.10.11 'for gpio in {0..10}; do Beetle --SetGpio "$gpio" HIGH; done'
00:01:45.561 Warning: Permanently added '192.168.10.11' (ED25519) to the list of known hosts.
00:01:48.107 [Pipeline] }
00:01:48.135 [Pipeline] // withCredentials
00:01:48.142 [Pipeline] }
00:01:48.164 [Pipeline] // stage
00:01:48.174 [Pipeline] stage
00:01:48.177 [Pipeline] { (Prepare)
00:01:48.194 [Pipeline] writeFile
00:01:48.210 [Pipeline] sh
00:01:48.494 + logger -p user.info -t JENKINS-CI
00:01:48.507 [Pipeline] sh
00:01:48.790 + logger -p user.info -t JENKINS-CI
00:01:48.802 [Pipeline] sh
00:01:49.086 + cat autorun-spdk.conf
00:01:49.086 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.086 SPDK_TEST_FUZZER_SHORT=1
00:01:49.086 SPDK_TEST_FUZZER=1
00:01:49.086 SPDK_TEST_SETUP=1
00:01:49.086 SPDK_RUN_UBSAN=1
00:01:49.094 RUN_NIGHTLY=0
00:01:49.097 [Pipeline] readFile
00:01:49.121 [Pipeline] withEnv
00:01:49.124 [Pipeline] {
00:01:49.139 [Pipeline] sh
00:01:49.425 + set -ex
00:01:49.425 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:01:49.425 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:49.425 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.425 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:49.425 ++ SPDK_TEST_FUZZER=1
00:01:49.425 ++ SPDK_TEST_SETUP=1
00:01:49.425 ++ SPDK_RUN_UBSAN=1
00:01:49.425 ++ RUN_NIGHTLY=0
00:01:49.425 + case $SPDK_TEST_NVMF_NICS in
00:01:49.425 + DRIVERS=
00:01:49.425 + [[ -n '' ]]
00:01:49.425 + exit 0
00:01:49.435 [Pipeline] }
00:01:49.454 [Pipeline] // withEnv
00:01:49.460 [Pipeline] }
00:01:49.479 [Pipeline] // stage
00:01:49.489 [Pipeline] catchError
00:01:49.491 [Pipeline] {
00:01:49.506 [Pipeline] timeout
00:01:49.506 Timeout set to expire in 30 min
00:01:49.507 [Pipeline] {
00:01:49.524 [Pipeline] stage
00:01:49.526 [Pipeline] { (Tests)
00:01:49.538 [Pipeline] sh
00:01:49.818 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:49.818 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:49.818 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:01:49.818 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:01:49.818 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:49.818 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:49.818 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:01:49.818 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:49.818 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:49.818 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:49.818 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:01:49.818 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:49.818 + source /etc/os-release
00:01:49.818 ++ NAME='Fedora Linux'
00:01:49.818 ++ VERSION='38 (Cloud Edition)'
00:01:49.818 ++ ID=fedora
00:01:49.818 ++ VERSION_ID=38
00:01:49.818 ++ VERSION_CODENAME=
00:01:49.818 ++ PLATFORM_ID=platform:f38
00:01:49.818 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:49.818 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:49.818 ++ LOGO=fedora-logo-icon
00:01:49.818 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:49.818 ++ HOME_URL=https://fedoraproject.org/
00:01:49.818 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:49.818 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:49.818 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:49.818 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:49.818 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:49.818 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:49.818 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:49.818 ++ SUPPORT_END=2024-05-14
00:01:49.818 ++ VARIANT='Cloud Edition'
00:01:49.818 ++ VARIANT_ID=cloud
00:01:49.818 + uname -a
00:01:49.818 Linux spdk-wfp-13 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:49.818 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:01:53.107 Hugepages
00:01:53.107 node hugesize free / total
00:01:53.107 node0 1048576kB 0 / 0
00:01:53.107 node0 2048kB 0 / 0
00:01:53.107 node1 1048576kB 0 / 0
00:01:53.107 node1 2048kB 0 / 0
00:01:53.107
00:01:53.107 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:53.107 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:53.107 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:53.107 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:53.107 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:53.107 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:53.107 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:53.107 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:53.107 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:53.107 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:53.107 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:53.107 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:53.107 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:53.107 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:53.107 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:53.107 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:53.107 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:53.107 NVMe 0000:dc:00.0 8086 0953 1 nvme nvme3 nvme3n1
00:01:53.107 NVMe 0000:dd:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:53.107 NVMe 0000:de:00.0 8086 0953 1 nvme nvme2 nvme2n1
00:01:53.107 NVMe 0000:df:00.0 8086 0a54 1 nvme nvme1 nvme1n1
00:01:53.107 + rm -f /tmp/spdk-ld-path
00:01:53.107 + source autorun-spdk.conf
00:01:53.107 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.107 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:53.107 ++ SPDK_TEST_FUZZER=1
00:01:53.107 ++ SPDK_TEST_SETUP=1
00:01:53.107 ++ SPDK_RUN_UBSAN=1
00:01:53.107 ++ RUN_NIGHTLY=0
00:01:53.107 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:53.107 + [[ -n '' ]]
00:01:53.107 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
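The Tests stage drives everything through autoruner.sh: it derives the workspace directories, sources /etc/os-release for distro detection, and finally hands the sourced autorun-spdk.conf to spdk/autorun.sh. A condensed sketch of that bootstrap, with paths taken from the log; the script body is an illustrative reduction, not the real autoruner.sh:

# Illustrative reduction of the autoruner.sh bootstrap traced above.
set -ex                                          # echo every command, stop on first error

DIR_ROOT=$(readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest)
DIR_SPDK=$DIR_ROOT/spdk
DIR_OUTPUT=$DIR_ROOT/output
[[ -d $DIR_OUTPUT ]] || mkdir -p "$DIR_OUTPUT"   # create the output dir on first run

source /etc/os-release                           # sets NAME, VERSION_ID, ... for distro checks
[[ -f $DIR_ROOT/autorun-spdk.conf ]]             # the job config must exist before sourcing
source "$DIR_ROOT/autorun-spdk.conf"             # exports the SPDK_TEST_* flags
"$DIR_SPDK"/autorun.sh "$DIR_ROOT"/autorun-spdk.conf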
00:01:53.107 + for M in /var/spdk/build-*-manifest.txt
00:01:53.107 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:53.107 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:53.107 + for M in /var/spdk/build-*-manifest.txt
00:01:53.107 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:53.107 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:53.107 ++ uname
00:01:53.107 + [[ Linux == \L\i\n\u\x ]]
00:01:53.107 + sudo dmesg -T
00:01:53.107 + sudo dmesg --clear
00:01:53.107 + dmesg_pid=288167
00:01:53.107 + [[ Fedora Linux == FreeBSD ]]
00:01:53.107 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:53.107 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:53.107 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:53.107 + [[ -x /usr/src/fio-static/fio ]]
00:01:53.107 + export FIO_BIN=/usr/src/fio-static/fio
00:01:53.107 + FIO_BIN=/usr/src/fio-static/fio
00:01:53.107 + sudo dmesg -Tw
00:01:53.107 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:53.107 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:53.107 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:53.107 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:53.107 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:53.107 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:53.107 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:53.107 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:53.107 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:53.107 Test configuration:
00:01:53.107 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.107 SPDK_TEST_FUZZER_SHORT=1
00:01:53.107 SPDK_TEST_FUZZER=1
00:01:53.107 SPDK_TEST_SETUP=1
00:01:53.107 SPDK_RUN_UBSAN=1
00:01:53.107 RUN_NIGHTLY=0
09:17:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
09:17:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
09:17:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
09:17:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
09:17:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:17:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:17:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:17:05 -- paths/export.sh@5 -- $ export PATH
09:17:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:17:05 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
09:17:05 -- common/autobuild_common.sh@447 -- $ date +%s
09:17:05 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721891825.XXXXXX
09:17:05 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721891825.Ddd4EJ
09:17:05 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
09:17:05 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
09:17:05 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
09:17:05 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
09:17:05 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
09:17:05 -- common/autobuild_common.sh@463 -- $ get_config_params
09:17:05 -- common/autotest_common.sh@398 -- $ xtrace_disable
09:17:05 -- common/autotest_common.sh@10 -- $ set +x
09:17:05 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
09:17:05 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
09:17:05 -- pm/common@17 -- $ local monitor
09:17:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:17:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:17:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:17:05 -- pm/common@21 -- $ date +%s
09:17:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:17:05 -- pm/common@21 -- $ date +%s
09:17:05 -- pm/common@25 -- $ sleep 1
09:17:05 -- pm/common@21 -- $ date +%s
09:17:05 -- pm/common@21 -- $ date +%s
09:17:05 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721891825
09:17:05 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721891825
09:17:05 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721891825
09:17:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721891825
00:01:53.366 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721891825_collect-vmstat.pm.log
00:01:53.366 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721891825_collect-cpu-load.pm.log
00:01:53.366 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721891825_collect-cpu-temp.pm.log
00:01:53.366 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721891825_collect-bmc-pm.bmc.pm.log
00:01:54.303 09:17:06 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:54.303 09:17:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:54.303 09:17:06 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:54.303 09:17:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:54.303 09:17:06 -- spdk/autobuild.sh@16 -- $ date -u
00:01:54.303 Thu Jul 25 07:17:06 AM UTC 2024
00:01:54.303 09:17:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:54.303 v24.09-pre-321-g704257090
00:01:54.303 09:17:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:54.303 09:17:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:54.303 09:17:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:54.303 09:17:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:54.303 09:17:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.303 09:17:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.303 ************************************
00:01:54.303 START TEST ubsan
00:01:54.303 ************************************
00:01:54.303 09:17:06 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:54.303 using ubsan
00:01:54.303
00:01:54.303 real 0m0.000s
00:01:54.303 user 0m0.000s
00:01:54.303 sys 0m0.000s
00:01:54.303 09:17:06 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:54.303 09:17:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:54.303 ************************************
00:01:54.303 END TEST ubsan
00:01:54.303 ************************************
00:01:54.303 09:17:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:54.303 09:17:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:54.303 09:17:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:54.303 09:17:06 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:01:54.303 09:17:06 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:01:54.304 09:17:06 -- common/autobuild_common.sh@435 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:01:54.304 09:17:06 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:54.304 09:17:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.304 09:17:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.304 ************************************
00:01:54.304 START TEST autobuild_llvm_precompile
00:01:54.304 ************************************
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38)
00:01:54.304 Target: x86_64-redhat-linux-gnu
00:01:54.304 Thread model: posix
00:01:54.304 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]]
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a'
00:01:54.304 09:17:07 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:01:54.562 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:54.562 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:55.127 Using 'verbs' RDMA provider
00:02:08.276 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:20.481 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:20.481 Creating mk/config.mk...done.
00:02:20.481 Creating mk/cc.flags.mk...done.
00:02:20.481 Type 'make' to build.
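The precompile step derives its toolchain setup from clang itself: the whole `clang --version` output is regex-matched to extract the major version, and an extglob pattern resolves the libFuzzer runtime that configure later receives via --with-fuzzer. A standalone sketch of those two tricks; the regex and glob are copied from the trace, while the surrounding scaffolding is illustrative:

# Sketch of the clang/fuzzer-lib detection traced above. The extglob patterns
# @(...) and ?(...) need 'shopt -s extglob' when run outside the original script.
shopt -s extglob

ver_out=$(clang --version)                       # multi-line, e.g. "clang version 16.0.6 ..."
re='version (([0-9]+)\.([0-9]+)\.([0-9]+))'
[[ $ver_out =~ $re ]]
clang_version=${BASH_REMATCH[1]}                 # full version, "16.0.6"
clang_num=${BASH_REMATCH[2]}                     # major version, "16"

export CC=clang-$clang_num CXX=clang++-$clang_num

# Alternation over major number or full version, optional -x86_64 suffix,
# exactly as in the trace; the first match wins.
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
fuzzer_lib=${fuzzer_libs[0]}
[[ -e $fuzzer_lib ]] && echo "fuzzer runtime: $fuzzer_lib"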
00:02:20.481
00:02:20.481 real 0m25.730s
00:02:20.481 user 0m12.133s
00:02:20.481 sys 0m12.618s
00:02:20.481 09:17:32 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:20.481 09:17:32 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:02:20.481 ************************************
00:02:20.481 END TEST autobuild_llvm_precompile
00:02:20.481 ************************************
00:02:20.481 09:17:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:20.481 09:17:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:20.481 09:17:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:20.481 09:17:32 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:02:20.481 09:17:32 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:02:20.481 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:02:20.481 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:02:20.739 Using 'verbs' RDMA provider
00:02:31.655 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:43.872 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:43.872 Creating mk/config.mk...done.
00:02:43.872 Creating mk/cc.flags.mk...done.
00:02:43.872 Type 'make' to build.
00:02:43.872 09:17:54 -- spdk/autobuild.sh@69 -- $ run_test make make -j88
00:02:43.872 09:17:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:43.872 09:17:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:43.872 09:17:54 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.872 ************************************
00:02:43.872 START TEST make
00:02:43.872 ************************************
00:02:43.872 09:17:54 make -- common/autotest_common.sh@1125 -- $ make -j88
00:02:43.872 make[1]: Nothing to be done for 'all'.
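run_test, which brackets every test in this log, prints the banner pair and a time block around whatever command it wraps. A simplified stand-in that reproduces just the visible behavior (the real helper in spdk's autotest_common.sh does more, e.g. xtrace management, so this is only a sketch):

# Simplified stand-in for the run_test wrapper whose output appears above.
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                                   # produces the real/user/sys block
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}

run_test make make -j88                         # -j88 matches this node; nproc is the usual choice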
00:02:44.131 The Meson build system
00:02:44.132 Version: 1.3.1
00:02:44.132 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:02:44.132 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:44.132 Build type: native build
00:02:44.132 Project name: libvfio-user
00:02:44.132 Project version: 0.0.1
00:02:44.132 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:02:44.132 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:02:44.132 Host machine cpu family: x86_64
00:02:44.132 Host machine cpu: x86_64
00:02:44.132 Run-time dependency threads found: YES
00:02:44.132 Library dl found: YES
00:02:44.132 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:44.132 Run-time dependency json-c found: YES 0.17
00:02:44.132 Run-time dependency cmocka found: YES 1.1.7
00:02:44.132 Program pytest-3 found: NO
00:02:44.132 Program flake8 found: NO
00:02:44.132 Program misspell-fixer found: NO
00:02:44.132 Program restructuredtext-lint found: NO
00:02:44.132 Program valgrind found: YES (/usr/bin/valgrind)
00:02:44.132 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:44.132 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:44.132 Compiler for C supports arguments -Wwrite-strings: YES
00:02:44.132 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:44.132 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:44.132 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:44.132 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:44.132 Build targets in project: 8
00:02:44.132 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:44.132 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:44.132
00:02:44.132 libvfio-user 0.0.1
00:02:44.132
00:02:44.132 User defined options
00:02:44.132 buildtype : debug
00:02:44.132 default_library: static
00:02:44.132 libdir : /usr/local/lib
00:02:44.132
00:02:44.132 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:44.390 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:44.390 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:44.390 [2/36] Compiling C object samples/lspci.p/lspci.c.o
00:02:44.390 [3/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:44.390 [4/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:02:44.390 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:02:44.390 [6/36] Compiling C object samples/null.p/null.c.o
00:02:44.390 [7/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:44.390 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:02:44.390 [9/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:44.390 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:44.390 [11/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:02:44.390 [12/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:44.390 [13/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:44.390 [14/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:44.390 [15/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:44.390 [16/36] Compiling C object test/unit_tests.p/mocks.c.o
00:02:44.390 [17/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:02:44.390 [18/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:02:44.390 [19/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:02:44.390 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:44.390 [21/36] Compiling C object samples/server.p/server.c.o
00:02:44.390 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:44.647 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:44.647 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:44.647 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:44.647 [26/36] Compiling C object samples/client.p/client.c.o
00:02:44.647 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:02:44.647 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:44.647 [29/36] Linking target samples/client
00:02:44.647 [30/36] Linking static target lib/libvfio-user.a
00:02:44.647 [31/36] Linking target test/unit_tests
00:02:44.647 [32/36] Linking target samples/lspci
00:02:44.647 [33/36] Linking target samples/gpio-pci-idio-16
00:02:44.647 [34/36] Linking target samples/shadow_ioeventfd_server
00:02:44.647 [35/36] Linking target samples/null
00:02:44.647 [36/36] Linking target samples/server
00:02:44.647 INFO: autodetecting backend as ninja
00:02:44.647 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:44.647 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:44.905 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:44.905 ninja: no work to do.
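The libvfio-user bundle is handled in three moves above: a meson setup into build-debug, a ninja build of the 36 targets, and a DESTDIR-redirected meson install so the artifacts stay inside the SPDK tree. A condensed sketch of that sequence, with directories from the log and the setup options inferred from the "User defined options" summary:

# Condensed form of the libvfio-user configure/build/install traced above.
src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
build=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user

# Mirrors "User defined options": debug build, static library, /usr/local/lib.
meson setup --buildtype=debug --default-library=static --libdir=/usr/local/lib \
    "$build/build-debug" "$src"

ninja -C "$build/build-debug"                    # the [1/36]..[36/36] run above

# Stage the install under $build instead of the real prefix.
DESTDIR=$build meson install --quiet -C "$build/build-debug"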
00:02:50.168 The Meson build system
00:02:50.168 Version: 1.3.1
00:02:50.168 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:02:50.168 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:02:50.168 Build type: native build
00:02:50.168 Program cat found: YES (/usr/bin/cat)
00:02:50.168 Project name: DPDK
00:02:50.168 Project version: 24.03.0
00:02:50.168 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:02:50.168 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:02:50.168 Host machine cpu family: x86_64
00:02:50.168 Host machine cpu: x86_64
00:02:50.168 Message: ## Building in Developer Mode ##
00:02:50.168 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:50.168 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:50.168 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:50.168 Program python3 found: YES (/usr/bin/python3)
00:02:50.168 Program cat found: YES (/usr/bin/cat)
00:02:50.168 Compiler for C supports arguments -march=native: YES
00:02:50.168 Checking for size of "void *" : 8
00:02:50.168 Checking for size of "void *" : 8 (cached)
00:02:50.168 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:50.168 Library m found: YES
00:02:50.168 Library numa found: YES
00:02:50.168 Has header "numaif.h" : YES
00:02:50.168 Library fdt found: NO
00:02:50.168 Library execinfo found: NO
00:02:50.168 Has header "execinfo.h" : YES
00:02:50.168 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:50.168 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:50.168 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:50.168 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:50.168 Run-time dependency openssl found: YES 3.0.9
00:02:50.168 Run-time dependency libpcap found: YES 1.10.4
00:02:50.168 Has header "pcap.h" with dependency libpcap: YES
00:02:50.168 Compiler for C supports arguments -Wcast-qual: YES
00:02:50.168 Compiler for C supports arguments -Wdeprecated: YES
00:02:50.168 Compiler for C supports arguments -Wformat: YES
00:02:50.168 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:50.168 Compiler for C supports arguments -Wformat-security: YES
00:02:50.168 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:50.168 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:50.168 Compiler for C supports arguments -Wnested-externs: YES
00:02:50.168 Compiler for C supports arguments -Wold-style-definition: YES
00:02:50.168 Compiler for C supports arguments -Wpointer-arith: YES
00:02:50.168 Compiler for C supports arguments -Wsign-compare: YES
00:02:50.168 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:50.168 Compiler for C supports arguments -Wundef: YES
00:02:50.168 Compiler for C supports arguments -Wwrite-strings: YES
00:02:50.168 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:50.169 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:02:50.169 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:50.169 Program objdump found: YES (/usr/bin/objdump)
00:02:50.169 Compiler for C supports arguments -mavx512f: YES
00:02:50.169 Checking if "AVX512 checking" compiles: YES
00:02:50.169 Fetching value of define "__SSE4_2__" : 1
00:02:50.169 Fetching value of define "__AES__" : 1
00:02:50.169 Fetching value of define "__AVX__" : 1
00:02:50.169 Fetching value of define "__AVX2__" : 1
00:02:50.169 Fetching value of define "__AVX512BW__" : 1
00:02:50.169 Fetching value of define "__AVX512CD__" : 1
00:02:50.169 Fetching value of define "__AVX512DQ__" : 1
00:02:50.169 Fetching value of define "__AVX512F__" : 1
00:02:50.169 Fetching value of define "__AVX512VL__" : 1
00:02:50.169 Fetching value of define "__PCLMUL__" : 1
00:02:50.169 Fetching value of define "__RDRND__" : 1
00:02:50.169 Fetching value of define "__RDSEED__" : 1
00:02:50.169 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:50.169 Fetching value of define "__znver1__" : (undefined)
00:02:50.169 Fetching value of define "__znver2__" : (undefined)
00:02:50.169 Fetching value of define "__znver3__" : (undefined)
00:02:50.169 Fetching value of define "__znver4__" : (undefined)
00:02:50.169 Compiler for C supports arguments -Wno-format-truncation: NO
00:02:50.169 Message: lib/log: Defining dependency "log"
00:02:50.169 Message: lib/kvargs: Defining dependency "kvargs"
00:02:50.169 Message: lib/telemetry: Defining dependency "telemetry"
00:02:50.169 Checking for function "getentropy" : NO
00:02:50.169 Message: lib/eal: Defining dependency "eal"
00:02:50.169 Message: lib/ring: Defining dependency "ring"
00:02:50.169 Message: lib/rcu: Defining dependency "rcu"
00:02:50.169 Message: lib/mempool: Defining dependency "mempool"
00:02:50.169 Message: lib/mbuf: Defining dependency "mbuf"
00:02:50.169 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:50.169 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:50.169 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:50.169 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:50.169 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:50.169 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:50.169 Compiler for C supports arguments -mpclmul: YES
00:02:50.169 Compiler for C supports arguments -maes: YES
00:02:50.169 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:50.169 Compiler for C supports arguments -mavx512bw: YES
00:02:50.169 Compiler for C supports arguments -mavx512dq: YES
00:02:50.169 Compiler for C supports arguments -mavx512vl: YES
00:02:50.169 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:50.169 Compiler for C supports arguments -mavx2: YES
00:02:50.169 Compiler for C supports arguments -mavx: YES
00:02:50.169 Message: lib/net: Defining dependency "net"
00:02:50.169 Message: lib/meter: Defining dependency "meter"
00:02:50.169 Message: lib/ethdev: Defining dependency "ethdev"
00:02:50.169 Message: lib/pci: Defining dependency "pci"
00:02:50.169 Message: lib/cmdline: Defining dependency "cmdline"
00:02:50.169 Message: lib/hash: Defining dependency "hash"
00:02:50.169 Message: lib/timer: Defining dependency "timer"
00:02:50.169 Message: lib/compressdev: Defining dependency "compressdev"
00:02:50.169 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:50.169 Message: lib/dmadev: Defining dependency "dmadev"
00:02:50.169 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:50.169 Message: lib/power: Defining dependency "power"
00:02:50.169 Message: lib/reorder: Defining dependency "reorder"
00:02:50.169 Message: lib/security: Defining dependency "security"
00:02:50.169 Has header "linux/userfaultfd.h" : YES
00:02:50.169 Has header "linux/vduse.h" : YES
00:02:50.169 Message: lib/vhost: Defining dependency "vhost"
00:02:50.169 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:02:50.169 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:50.169 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:50.169 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:50.169 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:50.169 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:50.169 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:50.169 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:50.169 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:50.169 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:50.169 Program doxygen found: YES (/usr/bin/doxygen)
00:02:50.169 Configuring doxy-api-html.conf using configuration
00:02:50.169 Configuring doxy-api-man.conf using configuration
00:02:50.169 Program mandb found: YES (/usr/bin/mandb)
00:02:50.169 Program sphinx-build found: NO
00:02:50.169 Configuring rte_build_config.h using configuration
00:02:50.169 Message:
00:02:50.169 =================
00:02:50.169 Applications Enabled
00:02:50.169 =================
00:02:50.169
00:02:50.169 apps:
00:02:50.169
00:02:50.169
00:02:50.169 Message:
00:02:50.169 =================
00:02:50.169 Libraries Enabled
00:02:50.169 =================
00:02:50.169
00:02:50.169 libs:
00:02:50.169 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:50.169 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:50.169 cryptodev, dmadev, power, reorder, security, vhost,
00:02:50.169
00:02:50.169 Message:
00:02:50.169 ===============
00:02:50.169 Drivers Enabled
00:02:50.169 ===============
00:02:50.169
00:02:50.169 common:
00:02:50.169
00:02:50.169 bus:
00:02:50.169 pci, vdev,
00:02:50.169 mempool:
00:02:50.169 ring,
00:02:50.169 dma:
00:02:50.169
00:02:50.169 net:
00:02:50.169
00:02:50.169 crypto:
00:02:50.169
00:02:50.169 compress:
00:02:50.169
00:02:50.169 vdpa:
00:02:50.169
00:02:50.169
00:02:50.169 Message:
00:02:50.169 =================
00:02:50.169 Content Skipped
00:02:50.169 =================
00:02:50.169
00:02:50.169 apps:
00:02:50.169 dumpcap: explicitly disabled via build config
00:02:50.169 graph: explicitly disabled via build config
00:02:50.169 pdump: explicitly disabled via build config
00:02:50.169 proc-info: explicitly disabled via build config
00:02:50.169 test-acl: explicitly disabled via build config
00:02:50.169 test-bbdev: explicitly disabled via build config
00:02:50.169 test-cmdline: explicitly disabled via build config
00:02:50.169 test-compress-perf: explicitly disabled via build config
00:02:50.169 test-crypto-perf: explicitly disabled via build config
00:02:50.169 test-dma-perf: explicitly disabled via build config
00:02:50.169 test-eventdev: explicitly disabled via build config
00:02:50.169 test-fib: explicitly disabled via build config
00:02:50.169 test-flow-perf: explicitly disabled via build config
00:02:50.169 test-gpudev: explicitly disabled via build config
00:02:50.169 test-mldev: explicitly disabled via build config
00:02:50.169 test-pipeline: explicitly disabled via build config
00:02:50.169 test-pmd: explicitly disabled via build config
00:02:50.169 test-regex: explicitly disabled via build config
00:02:50.169 test-sad: explicitly disabled via build config
00:02:50.169 test-security-perf: explicitly disabled via build config
00:02:50.169
00:02:50.169 libs:
00:02:50.169 argparse: explicitly disabled via build config
00:02:50.169 metrics: explicitly disabled via build config
00:02:50.169 acl: explicitly disabled via build config
00:02:50.169 bbdev: explicitly disabled via build config
00:02:50.169 bitratestats: explicitly disabled via build config
00:02:50.169 bpf: explicitly disabled via build config
00:02:50.169 cfgfile: explicitly disabled via build config
00:02:50.169 distributor: explicitly disabled via build config
00:02:50.169 efd: explicitly disabled via build config
00:02:50.169 eventdev: explicitly disabled via build config
00:02:50.169 dispatcher: explicitly disabled via build config
00:02:50.169 gpudev: explicitly disabled via build config
00:02:50.169 gro: explicitly disabled via build config
00:02:50.169 gso: explicitly disabled via build config
00:02:50.169 ip_frag: explicitly disabled via build config
00:02:50.169 jobstats: explicitly disabled via build config
00:02:50.169 latencystats: explicitly disabled via build config
00:02:50.169 lpm: explicitly disabled via build config
00:02:50.169 member: explicitly disabled via build config
00:02:50.169 pcapng: explicitly disabled via build config
00:02:50.169 rawdev: explicitly disabled via build config
00:02:50.169 regexdev: explicitly disabled via build config
00:02:50.169 mldev: explicitly disabled via build config
00:02:50.169 rib: explicitly disabled via build config
00:02:50.169 sched: explicitly disabled via build config
00:02:50.169 stack: explicitly disabled via build config
00:02:50.169 ipsec: explicitly disabled via build config
00:02:50.169 pdcp: explicitly disabled via build config
00:02:50.169 fib: explicitly disabled via build config
00:02:50.169 port: explicitly disabled via build config
00:02:50.169 pdump: explicitly disabled via build config
00:02:50.169 table: explicitly disabled via build config
00:02:50.169 pipeline: explicitly disabled via build config
00:02:50.169 graph: explicitly disabled via build config
00:02:50.169 node: explicitly disabled via build config
00:02:50.169
00:02:50.169 drivers:
00:02:50.169 common/cpt: not in enabled drivers build config
00:02:50.169 common/dpaax: not in enabled drivers build config
00:02:50.169 common/iavf: not in enabled drivers build config
00:02:50.169 common/idpf: not in enabled drivers build config
00:02:50.169 common/ionic: not in enabled drivers build config
00:02:50.169 common/mvep: not in enabled drivers build config
00:02:50.169 common/octeontx: not in enabled drivers build config
00:02:50.169 bus/auxiliary: not in enabled drivers build config
00:02:50.169 bus/cdx: not in enabled drivers build config
00:02:50.169 bus/dpaa: not in enabled drivers build config
00:02:50.169 bus/fslmc: not in enabled drivers build config
00:02:50.169 bus/ifpga: not in enabled drivers build config
00:02:50.169 bus/platform: not in enabled drivers build config
00:02:50.169 bus/uacce: not in enabled drivers build config
00:02:50.169 bus/vmbus: not in enabled drivers build config
00:02:50.169 common/cnxk: not in enabled drivers build config
00:02:50.169 common/mlx5: not in enabled drivers build config
00:02:50.169 common/nfp: not in enabled drivers build config
00:02:50.169 common/nitrox: not in enabled drivers build config
00:02:50.169 common/qat: not in enabled drivers build config
00:02:50.170 common/sfc_efx: not in enabled drivers build config
00:02:50.170 mempool/bucket: not in enabled drivers build config
00:02:50.170 mempool/cnxk: not in enabled drivers build config
00:02:50.170 mempool/dpaa: not in enabled drivers build config
00:02:50.170 mempool/dpaa2: not in enabled drivers build config
00:02:50.170 mempool/octeontx: not in enabled drivers build config
00:02:50.170 mempool/stack: not in enabled drivers build config
00:02:50.170 dma/cnxk: not in enabled drivers build config
00:02:50.170 dma/dpaa: not in enabled drivers build config
00:02:50.170 dma/dpaa2: not in enabled drivers build config
00:02:50.170 dma/hisilicon: not in enabled drivers build config
00:02:50.170 dma/idxd: not in enabled drivers build config
00:02:50.170 dma/ioat: not in enabled drivers build config
00:02:50.170 dma/skeleton: not in enabled drivers build config
00:02:50.170 net/af_packet: not in enabled drivers build config
00:02:50.170 net/af_xdp: not in enabled drivers build config
00:02:50.170 net/ark: not in enabled drivers build config
00:02:50.170 net/atlantic: not in enabled drivers build config
00:02:50.170 net/avp: not in enabled drivers build config
00:02:50.170 net/axgbe: not in enabled drivers build config
00:02:50.170 net/bnx2x: not in enabled drivers build config
00:02:50.170 net/bnxt: not in enabled drivers build config
00:02:50.170 net/bonding: not in enabled drivers build config
00:02:50.170 net/cnxk: not in enabled drivers build config
00:02:50.170 net/cpfl: not in enabled drivers build config
00:02:50.170 net/cxgbe: not in enabled drivers build config
00:02:50.170 net/dpaa: not in enabled drivers build config
00:02:50.170 net/dpaa2: not in enabled drivers build config
00:02:50.170 net/e1000: not in enabled drivers build config
00:02:50.170 net/ena: not in enabled drivers build config
00:02:50.170 net/enetc: not in enabled drivers build config
00:02:50.170 net/enetfec: not in enabled drivers build config
00:02:50.170 net/enic: not in enabled drivers build config
00:02:50.170 net/failsafe: not in enabled drivers build config
00:02:50.170 net/fm10k: not in enabled drivers build config
00:02:50.170 net/gve: not in enabled drivers build config
00:02:50.170 net/hinic: not in enabled drivers build config
00:02:50.170 net/hns3: not in enabled drivers build config
00:02:50.170 net/i40e: not in enabled drivers build config
00:02:50.170 net/iavf: not in enabled drivers build config
00:02:50.170 net/ice: not in enabled drivers build config
00:02:50.170 net/idpf: not in enabled drivers build config
00:02:50.170 net/igc: not in enabled drivers build config
00:02:50.170 net/ionic: not in enabled drivers build config
00:02:50.170 net/ipn3ke: not in enabled drivers build config
00:02:50.170 net/ixgbe: not in enabled drivers build config
00:02:50.170 net/mana: not in enabled drivers build config
00:02:50.170 net/memif: not in enabled drivers build config
00:02:50.170 net/mlx4: not in enabled drivers build config
00:02:50.170 net/mlx5: not in enabled drivers build config
00:02:50.170 net/mvneta: not in enabled drivers build config
00:02:50.170 net/mvpp2: not in enabled drivers build config
00:02:50.170 net/netvsc: not in enabled drivers build config
00:02:50.170 net/nfb: not in enabled drivers build config
00:02:50.170 net/nfp: not in enabled drivers build config
00:02:50.170 net/ngbe: not in enabled drivers build config
00:02:50.170 net/null: not in enabled drivers build config
00:02:50.170 net/octeontx: not in enabled drivers build config
00:02:50.170 net/octeon_ep: not in enabled drivers build config
00:02:50.170 net/pcap: not in enabled drivers build config
00:02:50.170 net/pfe: not in enabled drivers build config
00:02:50.170 net/qede: not in enabled drivers build config
00:02:50.170 net/ring: not in enabled drivers build config
00:02:50.170 net/sfc: not in enabled drivers build config
00:02:50.170 net/softnic: not in enabled drivers build config
00:02:50.170 net/tap: not in enabled drivers build config
00:02:50.170 net/thunderx: not in enabled drivers build config
00:02:50.170 net/txgbe: not in enabled drivers build config
00:02:50.170 net/vdev_netvsc: not in enabled drivers build config
00:02:50.170 net/vhost: not in enabled drivers build config
00:02:50.170 net/virtio: not in enabled drivers build config
00:02:50.170 net/vmxnet3: not in enabled drivers build config
00:02:50.170 raw/*: missing internal dependency, "rawdev"
00:02:50.170 crypto/armv8: not in enabled drivers build config
00:02:50.170 crypto/bcmfs: not in enabled drivers build config
00:02:50.170 crypto/caam_jr: not in enabled drivers build config
00:02:50.170 crypto/ccp: not in enabled drivers build config
00:02:50.170 crypto/cnxk: not in enabled drivers build config
00:02:50.170 crypto/dpaa_sec: not in enabled drivers build config
00:02:50.170 crypto/dpaa2_sec: not in enabled drivers build config
00:02:50.170 crypto/ipsec_mb: not in enabled drivers build config
00:02:50.170 crypto/mlx5: not in enabled drivers build config
00:02:50.170 crypto/mvsam: not in enabled drivers build config
00:02:50.170 crypto/nitrox: not in enabled drivers build config
00:02:50.170 crypto/null: not in enabled drivers build config
00:02:50.170 crypto/octeontx: not in enabled drivers build config
00:02:50.170 crypto/openssl: not in enabled drivers build config
00:02:50.170 crypto/scheduler: not in enabled drivers build config
00:02:50.170 crypto/uadk: not in enabled drivers build config
00:02:50.170 crypto/virtio: not in enabled drivers build config
00:02:50.170 compress/isal: not in enabled drivers build config
00:02:50.170 compress/mlx5: not in enabled drivers build config
00:02:50.170 compress/nitrox: not in enabled drivers build config
00:02:50.170 compress/octeontx: not in enabled drivers build config
00:02:50.170 compress/zlib: not in enabled drivers build config
00:02:50.170 regex/*: missing internal dependency, "regexdev"
00:02:50.170 ml/*: missing internal dependency, "mldev"
00:02:50.170 vdpa/ifc: not in enabled drivers build config
00:02:50.170 vdpa/mlx5: not in enabled drivers build config
00:02:50.170 vdpa/nfp: not in enabled drivers build config
00:02:50.170 vdpa/sfc: not in enabled drivers build config
00:02:50.170 event/*: missing internal dependency, "eventdev"
00:02:50.170 baseband/*: missing internal dependency, "bbdev"
00:02:50.170 gpu/*: missing internal dependency, "gpudev"
00:02:50.170
00:02:50.170
00:02:50.428 Build targets in project: 85
00:02:50.428
00:02:50.428 DPDK 24.03.0
00:02:50.428
00:02:50.428 User defined options
00:02:50.428 buildtype : debug
00:02:50.428 default_library : static
00:02:50.428 libdir : lib
00:02:50.428 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:02:50.428 c_args : -fPIC -Werror
00:02:50.428 c_link_args :
00:02:50.428 cpu_instruction_set: native
00:02:50.428 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:50.428 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:50.428 enable_docs : false
00:02:50.428 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:50.428 enable_kmods : false
00:02:50.428 max_lcores : 128
00:02:50.428 tests : false
00:02:50.428
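DPDK's configure pass ends with the "User defined options" summary above; each entry maps one-to-one onto a -D switch of meson setup. The invocation implied by that summary would look roughly as follows (list values abridged to their first few entries, full lists as printed above; the exact command line SPDK's dpdkbuild emits is not shown in this log, so this is an inference):

# Inferred (not logged) meson setup call behind the DPDK option summary above.
meson setup build-tmp \
    -Dbuildtype=debug -Ddefault_library=static -Dlibdir=lib \
    -Dprefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
    -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false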
00:02:50.428 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:50.691 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:02:50.962 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:50.962 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:50.962 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:50.962 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:50.962 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:50.962 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:50.962 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:50.962 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:50.962 [9/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:50.962 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:50.962 [11/268] Linking static target lib/librte_kvargs.a
00:02:50.962 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:50.962 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:50.962 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:50.962 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:50.962 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:50.962 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:50.962 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:50.962 [19/268] Linking static target lib/librte_log.a
00:02:51.220 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.481 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:51.481 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:51.481 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:51.481 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:51.481 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:51.481 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:51.481 [27/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:51.481 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:51.481 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:51.481 [30/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:51.481 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:51.481 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:51.481 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:51.481 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:51.481 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:51.481 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:51.481 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:51.481 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:51.481 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:51.481 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:51.481 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:51.481 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:51.481 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:51.481 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:51.481 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:51.481 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:51.481 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:51.481 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:51.481 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:51.481 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:51.481 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:51.481 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:51.481 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:51.481 [54/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:51.481 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:51.481 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:51.481 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:51.481 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:51.481 [59/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:51.481 [60/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:51.481 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:51.481 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:51.481 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:51.481 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:51.481 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:51.481 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:51.481 [67/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:51.481 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:51.481 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:51.481 [70/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:51.481 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:51.481 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:51.481 [73/268] Linking static target lib/librte_pci.a
00:02:51.481 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:51.481 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:51.481 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:51.481 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:51.481 [79/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:51.481 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:51.481 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:51.481 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:51.481 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:51.481 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:51.481 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:51.481 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:51.481 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:51.481 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:51.481 [89/268] Linking static target lib/librte_meter.a
00:02:51.481 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:51.481 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:51.481 [92/268] Linking static target lib/librte_ring.a
00:02:51.481 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:51.481 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:51.481 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:51.481 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:51.481 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:51.481 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:51.481 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:51.481 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:51.481 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:51.481 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:51.481 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:51.481 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:51.481 [105/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.481 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:51.481 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:51.740 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:51.740 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:51.740 [110/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:51.740 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:51.740 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:51.740 [113/268] Linking static target lib/librte_eal.a
00:02:51.740 [114/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:51.740 [115/268] Compiling C object
lib/librte_net.a.p/net_rte_ether.c.o 00:02:51.740 [116/268] Linking target lib/librte_log.so.24.1 00:02:51.740 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:51.740 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.740 [119/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:51.740 [120/268] Linking static target lib/librte_mempool.a 00:02:51.740 [121/268] Linking static target lib/librte_net.a 00:02:51.740 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:51.740 [123/268] Linking static target lib/librte_rcu.a 00:02:51.740 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:51.740 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:51.740 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.740 [127/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.740 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.740 [129/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:51.740 [130/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.740 [131/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:51.740 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.740 [133/268] Linking static target lib/librte_mbuf.a 00:02:51.740 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.740 [135/268] Linking target lib/librte_kvargs.so.24.1 00:02:51.740 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.740 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.999 [138/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.999 [139/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.999 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.999 [141/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.999 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.999 [143/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.999 [144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.999 [145/268] Linking static target lib/librte_timer.a 00:02:51.999 [146/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.999 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.999 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.999 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.999 [150/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.999 [151/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:51.999 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.999 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.999 [154/268] Linking target lib/librte_telemetry.so.24.1 00:02:51.999 [155/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 
00:02:51.999 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.999 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.999 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.999 [159/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.999 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.999 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.999 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.999 [163/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.999 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.999 [165/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.999 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:51.999 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.999 [168/268] Linking static target lib/librte_reorder.a 00:02:51.999 [169/268] Linking static target lib/librte_cmdline.a 00:02:51.999 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:51.999 [171/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:51.999 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.999 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.999 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.999 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.999 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.258 [177/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.258 [178/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.258 [179/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.258 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.258 [181/268] Linking static target lib/librte_power.a 00:02:52.258 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.258 [183/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.258 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.258 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.258 [186/268] Linking static target lib/librte_dmadev.a 00:02:52.258 [187/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.258 [188/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.258 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:52.258 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.258 [191/268] Linking static target lib/librte_compressdev.a 00:02:52.258 [192/268] Linking static target drivers/librte_bus_vdev.a 00:02:52.258 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.258 [194/268] Linking static target lib/librte_hash.a 00:02:52.258 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:52.258 [196/268] Compiling C 
object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.258 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:52.258 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:52.258 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.258 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.258 [201/268] Linking static target lib/librte_security.a 00:02:52.258 [202/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.258 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:52.258 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.258 [205/268] Linking static target lib/librte_ethdev.a 00:02:52.258 [206/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.517 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.517 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:52.517 [209/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.517 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.517 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.517 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.517 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.517 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.517 [215/268] Linking static target drivers/librte_mempool_ring.a 00:02:52.517 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:52.517 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.517 [218/268] Linking static target lib/librte_cryptodev.a 00:02:52.517 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.517 [220/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.776 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.776 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.776 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.035 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.035 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.294 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.294 [227/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.294 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.294 [229/268] Linking static target lib/librte_vhost.a 00:02:54.230 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.167 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.448 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:00.724 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.724 [234/268] Linking target lib/librte_eal.so.24.1 00:03:00.986 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:00.986 [236/268] Linking target lib/librte_ring.so.24.1 00:03:00.986 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:00.986 [238/268] Linking target lib/librte_pci.so.24.1 00:03:00.986 [239/268] Linking target lib/librte_timer.so.24.1 00:03:00.986 [240/268] Linking target lib/librte_meter.so.24.1 00:03:00.986 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:00.986 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:00.986 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:00.986 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:00.986 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:01.246 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:01.246 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:01.246 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:01.246 [249/268] Linking target lib/librte_rcu.so.24.1 00:03:01.246 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:01.246 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:01.246 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:01.246 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:01.507 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:01.507 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:01.507 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:01.507 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:01.507 [258/268] Linking target lib/librte_net.so.24.1 00:03:01.507 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:01.766 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:01.766 [261/268] Linking target lib/librte_hash.so.24.1 00:03:01.766 [262/268] Linking target lib/librte_security.so.24.1 00:03:01.766 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:01.766 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:01.766 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:01.766 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:02.026 [267/268] Linking target lib/librte_power.so.24.1 00:03:02.026 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:02.026 INFO: autodetecting backend as ninja 00:03:02.026 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 88 00:03:02.595 CC lib/ut/ut.o 00:03:02.853 CC lib/log/log.o 00:03:02.853 CC lib/log/log_flags.o 00:03:02.853 CC lib/log/log_deprecated.o 00:03:02.853 CC lib/ut_mock/mock.o 00:03:02.853 LIB libspdk_log.a 00:03:02.853 LIB libspdk_ut.a 00:03:02.853 LIB libspdk_ut_mock.a 00:03:03.111 CC lib/util/base64.o 00:03:03.111 CC lib/util/cpuset.o 00:03:03.111 CC lib/util/bit_array.o 00:03:03.111 CC lib/util/crc32.o 00:03:03.111 CC lib/util/crc16.o 00:03:03.111 CXX lib/trace_parser/trace.o 
00:03:03.112 CC lib/util/crc32c.o 00:03:03.112 CC lib/util/crc32_ieee.o 00:03:03.112 CC lib/util/fd.o 00:03:03.112 CC lib/util/crc64.o 00:03:03.112 CC lib/util/dif.o 00:03:03.112 CC lib/util/fd_group.o 00:03:03.112 CC lib/util/file.o 00:03:03.112 CC lib/util/hexlify.o 00:03:03.112 CC lib/util/iov.o 00:03:03.112 CC lib/util/math.o 00:03:03.112 CC lib/util/net.o 00:03:03.112 CC lib/util/strerror_tls.o 00:03:03.112 CC lib/util/pipe.o 00:03:03.112 CC lib/dma/dma.o 00:03:03.112 CC lib/util/string.o 00:03:03.112 CC lib/util/xor.o 00:03:03.112 CC lib/util/uuid.o 00:03:03.112 CC lib/util/zipf.o 00:03:03.112 CC lib/ioat/ioat.o 00:03:03.370 CC lib/vfio_user/host/vfio_user_pci.o 00:03:03.370 CC lib/vfio_user/host/vfio_user.o 00:03:03.370 LIB libspdk_dma.a 00:03:03.370 LIB libspdk_ioat.a 00:03:03.370 LIB libspdk_vfio_user.a 00:03:03.370 LIB libspdk_util.a 00:03:03.629 LIB libspdk_trace_parser.a 00:03:03.629 CC lib/rdma_provider/common.o 00:03:03.629 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:03.629 CC lib/conf/conf.o 00:03:03.629 CC lib/vmd/vmd.o 00:03:03.629 CC lib/vmd/led.o 00:03:03.629 CC lib/json/json_parse.o 00:03:03.629 CC lib/json/json_write.o 00:03:03.629 CC lib/json/json_util.o 00:03:03.629 CC lib/env_dpdk/env.o 00:03:03.629 CC lib/rdma_utils/rdma_utils.o 00:03:03.629 CC lib/env_dpdk/memory.o 00:03:03.629 CC lib/env_dpdk/pci.o 00:03:03.629 CC lib/idxd/idxd.o 00:03:03.629 CC lib/env_dpdk/init.o 00:03:03.629 CC lib/idxd/idxd_user.o 00:03:03.629 CC lib/env_dpdk/threads.o 00:03:03.629 CC lib/idxd/idxd_kernel.o 00:03:03.629 CC lib/env_dpdk/pci_ioat.o 00:03:03.629 CC lib/env_dpdk/pci_virtio.o 00:03:03.629 CC lib/env_dpdk/pci_vmd.o 00:03:03.629 CC lib/env_dpdk/pci_idxd.o 00:03:03.629 CC lib/env_dpdk/pci_event.o 00:03:03.629 CC lib/env_dpdk/sigbus_handler.o 00:03:03.629 CC lib/env_dpdk/pci_dpdk.o 00:03:03.629 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:03.629 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:03.888 LIB libspdk_rdma_provider.a 00:03:03.888 LIB libspdk_conf.a 00:03:03.888 LIB libspdk_rdma_utils.a 00:03:03.888 LIB libspdk_json.a 00:03:04.147 LIB libspdk_idxd.a 00:03:04.147 LIB libspdk_vmd.a 00:03:04.147 CC lib/jsonrpc/jsonrpc_server.o 00:03:04.147 CC lib/jsonrpc/jsonrpc_client.o 00:03:04.147 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:04.147 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:04.406 LIB libspdk_jsonrpc.a 00:03:04.665 CC lib/rpc/rpc.o 00:03:04.665 LIB libspdk_env_dpdk.a 00:03:04.665 LIB libspdk_rpc.a 00:03:05.233 CC lib/notify/notify.o 00:03:05.233 CC lib/trace/trace_flags.o 00:03:05.233 CC lib/trace/trace.o 00:03:05.233 CC lib/notify/notify_rpc.o 00:03:05.233 CC lib/trace/trace_rpc.o 00:03:05.233 CC lib/keyring/keyring.o 00:03:05.233 CC lib/keyring/keyring_rpc.o 00:03:05.233 LIB libspdk_notify.a 00:03:05.233 LIB libspdk_trace.a 00:03:05.233 LIB libspdk_keyring.a 00:03:05.492 CC lib/sock/sock_rpc.o 00:03:05.492 CC lib/sock/sock.o 00:03:05.492 CC lib/thread/thread.o 00:03:05.492 CC lib/thread/iobuf.o 00:03:05.751 LIB libspdk_sock.a 00:03:06.011 CC lib/nvme/nvme_fabric.o 00:03:06.011 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:06.011 CC lib/nvme/nvme_ctrlr.o 00:03:06.011 CC lib/nvme/nvme_ns.o 00:03:06.011 CC lib/nvme/nvme_ns_cmd.o 00:03:06.011 CC lib/nvme/nvme_pcie_common.o 00:03:06.011 CC lib/nvme/nvme_pcie.o 00:03:06.011 CC lib/nvme/nvme_qpair.o 00:03:06.011 CC lib/nvme/nvme.o 00:03:06.011 CC lib/nvme/nvme_quirks.o 00:03:06.011 CC lib/nvme/nvme_transport.o 00:03:06.011 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:06.011 CC lib/nvme/nvme_discovery.o 00:03:06.011 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:06.011 
CC lib/nvme/nvme_opal.o 00:03:06.011 CC lib/nvme/nvme_io_msg.o 00:03:06.011 CC lib/nvme/nvme_tcp.o 00:03:06.011 CC lib/nvme/nvme_poll_group.o 00:03:06.011 CC lib/nvme/nvme_zns.o 00:03:06.011 CC lib/nvme/nvme_stubs.o 00:03:06.011 CC lib/nvme/nvme_auth.o 00:03:06.011 CC lib/nvme/nvme_cuse.o 00:03:06.011 CC lib/nvme/nvme_vfio_user.o 00:03:06.011 CC lib/nvme/nvme_rdma.o 00:03:06.269 LIB libspdk_thread.a 00:03:06.528 CC lib/accel/accel.o 00:03:06.528 CC lib/accel/accel_sw.o 00:03:06.528 CC lib/accel/accel_rpc.o 00:03:06.528 CC lib/blob/blobstore.o 00:03:06.528 CC lib/vfu_tgt/tgt_endpoint.o 00:03:06.528 CC lib/blob/request.o 00:03:06.528 CC lib/blob/zeroes.o 00:03:06.528 CC lib/vfu_tgt/tgt_rpc.o 00:03:06.528 CC lib/blob/blob_bs_dev.o 00:03:06.528 CC lib/init/subsystem.o 00:03:06.528 CC lib/virtio/virtio.o 00:03:06.528 CC lib/init/json_config.o 00:03:06.528 CC lib/virtio/virtio_vhost_user.o 00:03:06.528 CC lib/init/subsystem_rpc.o 00:03:06.528 CC lib/virtio/virtio_vfio_user.o 00:03:06.528 CC lib/init/rpc.o 00:03:06.528 CC lib/virtio/virtio_pci.o 00:03:06.788 LIB libspdk_init.a 00:03:06.788 LIB libspdk_vfu_tgt.a 00:03:06.788 LIB libspdk_virtio.a 00:03:07.047 CC lib/event/app.o 00:03:07.047 CC lib/event/reactor.o 00:03:07.047 CC lib/event/app_rpc.o 00:03:07.047 CC lib/event/log_rpc.o 00:03:07.047 CC lib/event/scheduler_static.o 00:03:07.307 LIB libspdk_accel.a 00:03:07.307 LIB libspdk_event.a 00:03:07.307 LIB libspdk_nvme.a 00:03:07.566 CC lib/bdev/bdev_rpc.o 00:03:07.566 CC lib/bdev/bdev.o 00:03:07.566 CC lib/bdev/part.o 00:03:07.566 CC lib/bdev/bdev_zone.o 00:03:07.566 CC lib/bdev/scsi_nvme.o 00:03:08.503 LIB libspdk_blob.a 00:03:08.503 CC lib/lvol/lvol.o 00:03:08.503 CC lib/blobfs/blobfs.o 00:03:08.503 CC lib/blobfs/tree.o 00:03:09.070 LIB libspdk_lvol.a 00:03:09.071 LIB libspdk_blobfs.a 00:03:09.071 LIB libspdk_bdev.a 00:03:09.638 CC lib/ftl/ftl_core.o 00:03:09.638 CC lib/ftl/ftl_init.o 00:03:09.638 CC lib/ftl/ftl_layout.o 00:03:09.638 CC lib/nvmf/ctrlr.o 00:03:09.638 CC lib/ftl/ftl_debug.o 00:03:09.638 CC lib/ftl/ftl_io.o 00:03:09.638 CC lib/scsi/dev.o 00:03:09.638 CC lib/nvmf/ctrlr_discovery.o 00:03:09.638 CC lib/ftl/ftl_sb.o 00:03:09.638 CC lib/ftl/ftl_l2p.o 00:03:09.638 CC lib/nvmf/ctrlr_bdev.o 00:03:09.638 CC lib/nvmf/subsystem.o 00:03:09.638 CC lib/scsi/lun.o 00:03:09.638 CC lib/ftl/ftl_l2p_flat.o 00:03:09.638 CC lib/nvmf/nvmf.o 00:03:09.638 CC lib/scsi/port.o 00:03:09.638 CC lib/ftl/ftl_nv_cache.o 00:03:09.638 CC lib/nvmf/nvmf_rpc.o 00:03:09.638 CC lib/scsi/scsi.o 00:03:09.638 CC lib/ftl/ftl_band.o 00:03:09.638 CC lib/nvmf/transport.o 00:03:09.638 CC lib/scsi/scsi_bdev.o 00:03:09.638 CC lib/nvmf/tcp.o 00:03:09.638 CC lib/scsi/scsi_rpc.o 00:03:09.638 CC lib/ftl/ftl_band_ops.o 00:03:09.638 CC lib/scsi/scsi_pr.o 00:03:09.638 CC lib/ftl/ftl_writer.o 00:03:09.638 CC lib/nvmf/stubs.o 00:03:09.638 CC lib/ftl/ftl_rq.o 00:03:09.638 CC lib/nvmf/mdns_server.o 00:03:09.638 CC lib/ftl/ftl_reloc.o 00:03:09.638 CC lib/scsi/task.o 00:03:09.638 CC lib/nvmf/vfio_user.o 00:03:09.638 CC lib/ftl/ftl_l2p_cache.o 00:03:09.638 CC lib/nvmf/rdma.o 00:03:09.638 CC lib/ftl/ftl_p2l.o 00:03:09.638 CC lib/ublk/ublk.o 00:03:09.638 CC lib/nvmf/auth.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.638 CC lib/ublk/ublk_rpc.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.638 CC lib/nbd/nbd.o 00:03:09.638 CC lib/nbd/nbd_rpc.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_ioch.o 
00:03:09.638 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.638 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.638 CC lib/ftl/utils/ftl_conf.o 00:03:09.638 CC lib/ftl/utils/ftl_md.o 00:03:09.638 CC lib/ftl/utils/ftl_mempool.o 00:03:09.638 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.638 CC lib/ftl/utils/ftl_property.o 00:03:09.638 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:09.638 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:09.638 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:09.638 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:09.638 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:09.638 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:09.638 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:09.638 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.638 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.638 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.638 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.638 CC lib/ftl/base/ftl_base_dev.o 00:03:09.638 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.638 CC lib/ftl/ftl_trace.o 00:03:09.896 LIB libspdk_scsi.a 00:03:09.896 LIB libspdk_nbd.a 00:03:10.155 LIB libspdk_ublk.a 00:03:10.155 CC lib/iscsi/init_grp.o 00:03:10.155 CC lib/iscsi/conn.o 00:03:10.155 CC lib/iscsi/md5.o 00:03:10.155 CC lib/iscsi/iscsi.o 00:03:10.155 CC lib/iscsi/param.o 00:03:10.155 CC lib/iscsi/portal_grp.o 00:03:10.155 CC lib/iscsi/iscsi_subsystem.o 00:03:10.155 CC lib/iscsi/tgt_node.o 00:03:10.155 CC lib/iscsi/iscsi_rpc.o 00:03:10.155 CC lib/iscsi/task.o 00:03:10.155 CC lib/vhost/vhost.o 00:03:10.155 CC lib/vhost/vhost_scsi.o 00:03:10.155 CC lib/vhost/vhost_rpc.o 00:03:10.155 CC lib/vhost/vhost_blk.o 00:03:10.155 CC lib/vhost/rte_vhost_user.o 00:03:10.415 LIB libspdk_ftl.a 00:03:10.676 LIB libspdk_nvmf.a 00:03:10.935 LIB libspdk_vhost.a 00:03:10.935 LIB libspdk_iscsi.a 00:03:11.193 CC module/vfu_device/vfu_virtio.o 00:03:11.193 CC module/vfu_device/vfu_virtio_blk.o 00:03:11.193 CC module/env_dpdk/env_dpdk_rpc.o 00:03:11.193 CC module/vfu_device/vfu_virtio_scsi.o 00:03:11.193 CC module/vfu_device/vfu_virtio_rpc.o 00:03:11.451 CC module/keyring/linux/keyring.o 00:03:11.451 CC module/keyring/linux/keyring_rpc.o 00:03:11.451 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:11.451 LIB libspdk_env_dpdk_rpc.a 00:03:11.451 CC module/keyring/file/keyring.o 00:03:11.451 CC module/keyring/file/keyring_rpc.o 00:03:11.451 CC module/accel/error/accel_error.o 00:03:11.451 CC module/blob/bdev/blob_bdev.o 00:03:11.451 CC module/accel/error/accel_error_rpc.o 00:03:11.451 CC module/sock/posix/posix.o 00:03:11.451 CC module/accel/dsa/accel_dsa.o 00:03:11.451 CC module/scheduler/gscheduler/gscheduler.o 00:03:11.451 CC module/accel/dsa/accel_dsa_rpc.o 00:03:11.451 CC module/accel/iaa/accel_iaa_rpc.o 00:03:11.451 CC module/accel/iaa/accel_iaa.o 00:03:11.451 CC module/accel/ioat/accel_ioat_rpc.o 00:03:11.451 CC module/accel/ioat/accel_ioat.o 00:03:11.451 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:11.451 LIB libspdk_keyring_linux.a 00:03:11.451 LIB libspdk_scheduler_dpdk_governor.a 00:03:11.451 LIB libspdk_keyring_file.a 00:03:11.451 LIB libspdk_scheduler_gscheduler.a 00:03:11.451 LIB libspdk_accel_error.a 00:03:11.708 LIB libspdk_accel_ioat.a 00:03:11.708 LIB libspdk_scheduler_dynamic.a 00:03:11.708 LIB libspdk_accel_iaa.a 00:03:11.708 LIB libspdk_blob_bdev.a 00:03:11.708 LIB libspdk_accel_dsa.a 00:03:11.708 LIB 
libspdk_vfu_device.a 00:03:11.966 LIB libspdk_sock_posix.a 00:03:11.966 CC module/bdev/error/vbdev_error.o 00:03:11.966 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.966 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.966 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:11.966 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:11.966 CC module/bdev/nvme/bdev_nvme.o 00:03:11.966 CC module/bdev/nvme/nvme_rpc.o 00:03:11.966 CC module/bdev/nvme/bdev_mdns_client.o 00:03:11.966 CC module/bdev/nvme/vbdev_opal.o 00:03:11.966 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:11.966 CC module/bdev/malloc/bdev_malloc.o 00:03:11.966 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.966 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:11.966 CC module/bdev/delay/vbdev_delay.o 00:03:11.966 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.966 CC module/blobfs/bdev/blobfs_bdev.o 00:03:11.966 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.966 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:11.966 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:11.966 CC module/bdev/raid/bdev_raid.o 00:03:11.966 CC module/bdev/split/vbdev_split.o 00:03:11.966 CC module/bdev/aio/bdev_aio.o 00:03:11.966 CC module/bdev/split/vbdev_split_rpc.o 00:03:11.966 CC module/bdev/raid/raid0.o 00:03:11.966 CC module/bdev/ftl/bdev_ftl.o 00:03:11.966 CC module/bdev/raid/bdev_raid_rpc.o 00:03:11.966 CC module/bdev/aio/bdev_aio_rpc.o 00:03:11.966 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:11.966 CC module/bdev/raid/bdev_raid_sb.o 00:03:11.966 CC module/bdev/iscsi/bdev_iscsi.o 00:03:11.966 CC module/bdev/raid/raid1.o 00:03:11.966 CC module/bdev/gpt/gpt.o 00:03:11.966 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:11.966 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.966 CC module/bdev/raid/concat.o 00:03:11.966 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.966 CC module/bdev/null/bdev_null.o 00:03:11.966 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.966 CC module/bdev/null/bdev_null_rpc.o 00:03:11.966 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:11.966 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:11.966 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.224 LIB libspdk_blobfs_bdev.a 00:03:12.224 LIB libspdk_bdev_split.a 00:03:12.224 LIB libspdk_bdev_error.a 00:03:12.224 LIB libspdk_bdev_gpt.a 00:03:12.224 LIB libspdk_bdev_null.a 00:03:12.224 LIB libspdk_bdev_ftl.a 00:03:12.224 LIB libspdk_bdev_zone_block.a 00:03:12.224 LIB libspdk_bdev_aio.a 00:03:12.224 LIB libspdk_bdev_iscsi.a 00:03:12.224 LIB libspdk_bdev_malloc.a 00:03:12.224 LIB libspdk_bdev_delay.a 00:03:12.224 LIB libspdk_bdev_passthru.a 00:03:12.483 LIB libspdk_bdev_lvol.a 00:03:12.483 LIB libspdk_bdev_virtio.a 00:03:12.741 LIB libspdk_bdev_raid.a 00:03:13.307 LIB libspdk_bdev_nvme.a 00:03:13.873 CC module/event/subsystems/sock/sock.o 00:03:13.873 CC module/event/subsystems/keyring/keyring.o 00:03:13.873 CC module/event/subsystems/iobuf/iobuf.o 00:03:13.873 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:13.873 CC module/event/subsystems/scheduler/scheduler.o 00:03:13.873 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:13.874 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:13.874 CC module/event/subsystems/vmd/vmd.o 00:03:13.874 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:13.874 LIB libspdk_event_keyring.a 00:03:13.874 LIB libspdk_event_sock.a 00:03:13.874 LIB libspdk_event_scheduler.a 00:03:13.874 LIB libspdk_event_vmd.a 00:03:13.874 LIB libspdk_event_vfu_tgt.a 00:03:13.874 LIB libspdk_event_iobuf.a 00:03:13.874 LIB libspdk_event_vhost_blk.a 00:03:14.131 CC 
module/event/subsystems/accel/accel.o 00:03:14.389 LIB libspdk_event_accel.a 00:03:14.646 CC module/event/subsystems/bdev/bdev.o 00:03:14.646 LIB libspdk_event_bdev.a 00:03:14.904 CC module/event/subsystems/nbd/nbd.o 00:03:14.904 CC module/event/subsystems/ublk/ublk.o 00:03:14.904 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.904 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:14.904 CC module/event/subsystems/scsi/scsi.o 00:03:15.162 LIB libspdk_event_nbd.a 00:03:15.162 LIB libspdk_event_ublk.a 00:03:15.162 LIB libspdk_event_scsi.a 00:03:15.162 LIB libspdk_event_nvmf.a 00:03:15.421 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:15.421 CC module/event/subsystems/iscsi/iscsi.o 00:03:15.421 LIB libspdk_event_vhost_scsi.a 00:03:15.421 LIB libspdk_event_iscsi.a 00:03:15.679 CXX app/trace/trace.o 00:03:15.679 TEST_HEADER include/spdk/accel.h 00:03:15.679 TEST_HEADER include/spdk/accel_module.h 00:03:15.679 TEST_HEADER include/spdk/barrier.h 00:03:15.679 TEST_HEADER include/spdk/assert.h 00:03:15.679 TEST_HEADER include/spdk/bdev.h 00:03:15.679 TEST_HEADER include/spdk/base64.h 00:03:15.679 TEST_HEADER include/spdk/bdev_zone.h 00:03:15.679 TEST_HEADER include/spdk/bit_array.h 00:03:15.679 TEST_HEADER include/spdk/bdev_module.h 00:03:15.679 CC app/spdk_nvme_perf/perf.o 00:03:15.679 TEST_HEADER include/spdk/bit_pool.h 00:03:15.679 TEST_HEADER include/spdk/blob_bdev.h 00:03:15.679 TEST_HEADER include/spdk/blobfs.h 00:03:15.679 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:15.679 TEST_HEADER include/spdk/blob.h 00:03:15.679 TEST_HEADER include/spdk/conf.h 00:03:15.679 CC app/spdk_lspci/spdk_lspci.o 00:03:15.679 TEST_HEADER include/spdk/config.h 00:03:15.679 TEST_HEADER include/spdk/cpuset.h 00:03:15.679 TEST_HEADER include/spdk/crc16.h 00:03:15.679 TEST_HEADER include/spdk/crc32.h 00:03:15.679 TEST_HEADER include/spdk/crc64.h 00:03:15.679 CC test/rpc_client/rpc_client_test.o 00:03:15.679 TEST_HEADER include/spdk/dif.h 00:03:15.679 TEST_HEADER include/spdk/dma.h 00:03:15.679 TEST_HEADER include/spdk/endian.h 00:03:15.679 TEST_HEADER include/spdk/env_dpdk.h 00:03:15.679 TEST_HEADER include/spdk/env.h 00:03:15.679 TEST_HEADER include/spdk/event.h 00:03:15.679 TEST_HEADER include/spdk/fd_group.h 00:03:15.679 TEST_HEADER include/spdk/fd.h 00:03:15.679 TEST_HEADER include/spdk/file.h 00:03:15.679 TEST_HEADER include/spdk/ftl.h 00:03:15.679 CC app/spdk_top/spdk_top.o 00:03:15.679 TEST_HEADER include/spdk/hexlify.h 00:03:15.679 TEST_HEADER include/spdk/gpt_spec.h 00:03:15.679 TEST_HEADER include/spdk/histogram_data.h 00:03:15.679 TEST_HEADER include/spdk/idxd.h 00:03:15.679 CC app/spdk_nvme_discover/discovery_aer.o 00:03:15.679 TEST_HEADER include/spdk/idxd_spec.h 00:03:15.679 CC app/spdk_nvme_identify/identify.o 00:03:15.679 TEST_HEADER include/spdk/ioat.h 00:03:15.679 TEST_HEADER include/spdk/init.h 00:03:15.679 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.679 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.679 TEST_HEADER include/spdk/json.h 00:03:15.679 CC app/trace_record/trace_record.o 00:03:15.679 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.679 TEST_HEADER include/spdk/likely.h 00:03:15.679 TEST_HEADER include/spdk/keyring_module.h 00:03:15.679 TEST_HEADER include/spdk/keyring.h 00:03:15.679 TEST_HEADER include/spdk/log.h 00:03:15.679 TEST_HEADER include/spdk/lvol.h 00:03:15.679 TEST_HEADER include/spdk/memory.h 00:03:15.679 TEST_HEADER include/spdk/mmio.h 00:03:15.679 TEST_HEADER include/spdk/net.h 00:03:15.679 TEST_HEADER include/spdk/nbd.h 00:03:15.679 TEST_HEADER 
include/spdk/notify.h 00:03:15.679 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:15.679 TEST_HEADER include/spdk/nvme.h 00:03:15.679 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.679 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.940 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:15.940 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.940 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:15.940 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.940 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.940 TEST_HEADER include/spdk/nvmf.h 00:03:15.940 TEST_HEADER include/spdk/nvmf_transport.h 00:03:15.940 TEST_HEADER include/spdk/nvmf_spec.h 00:03:15.940 TEST_HEADER include/spdk/opal_spec.h 00:03:15.940 TEST_HEADER include/spdk/pci_ids.h 00:03:15.940 TEST_HEADER include/spdk/opal.h 00:03:15.940 TEST_HEADER include/spdk/pipe.h 00:03:15.940 TEST_HEADER include/spdk/queue.h 00:03:15.940 TEST_HEADER include/spdk/rpc.h 00:03:15.940 TEST_HEADER include/spdk/scheduler.h 00:03:15.940 TEST_HEADER include/spdk/scsi.h 00:03:15.940 TEST_HEADER include/spdk/scsi_spec.h 00:03:15.940 TEST_HEADER include/spdk/sock.h 00:03:15.940 TEST_HEADER include/spdk/reduce.h 00:03:15.940 TEST_HEADER include/spdk/stdinc.h 00:03:15.940 TEST_HEADER include/spdk/string.h 00:03:15.940 TEST_HEADER include/spdk/trace.h 00:03:15.940 TEST_HEADER include/spdk/trace_parser.h 00:03:15.940 TEST_HEADER include/spdk/thread.h 00:03:15.940 TEST_HEADER include/spdk/tree.h 00:03:15.940 TEST_HEADER include/spdk/ublk.h 00:03:15.940 TEST_HEADER include/spdk/version.h 00:03:15.940 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:15.940 TEST_HEADER include/spdk/util.h 00:03:15.940 TEST_HEADER include/spdk/uuid.h 00:03:15.940 TEST_HEADER include/spdk/vhost.h 00:03:15.940 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:15.940 TEST_HEADER include/spdk/vmd.h 00:03:15.940 TEST_HEADER include/spdk/xor.h 00:03:15.940 CC app/nvmf_tgt/nvmf_main.o 00:03:15.940 TEST_HEADER include/spdk/zipf.h 00:03:15.940 CXX test/cpp_headers/accel_module.o 00:03:15.940 CXX test/cpp_headers/assert.o 00:03:15.940 CXX test/cpp_headers/accel.o 00:03:15.940 CXX test/cpp_headers/base64.o 00:03:15.940 CXX test/cpp_headers/bdev.o 00:03:15.940 CXX test/cpp_headers/barrier.o 00:03:15.940 CXX test/cpp_headers/bdev_zone.o 00:03:15.940 CXX test/cpp_headers/bdev_module.o 00:03:15.940 CXX test/cpp_headers/bit_pool.o 00:03:15.940 CXX test/cpp_headers/bit_array.o 00:03:15.940 CXX test/cpp_headers/blobfs_bdev.o 00:03:15.940 CXX test/cpp_headers/blob_bdev.o 00:03:15.940 CXX test/cpp_headers/blobfs.o 00:03:15.940 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.940 CXX test/cpp_headers/conf.o 00:03:15.940 CXX test/cpp_headers/config.o 00:03:15.940 CXX test/cpp_headers/blob.o 00:03:15.940 CXX test/cpp_headers/cpuset.o 00:03:15.940 CXX test/cpp_headers/crc16.o 00:03:15.940 CXX test/cpp_headers/crc32.o 00:03:15.940 CXX test/cpp_headers/crc64.o 00:03:15.940 CXX test/cpp_headers/dif.o 00:03:15.940 CXX test/cpp_headers/dma.o 00:03:15.940 CXX test/cpp_headers/endian.o 00:03:15.940 CXX test/cpp_headers/env_dpdk.o 00:03:15.940 CXX test/cpp_headers/env.o 00:03:15.940 CXX test/cpp_headers/event.o 00:03:15.940 CXX test/cpp_headers/fd.o 00:03:15.940 CXX test/cpp_headers/fd_group.o 00:03:15.940 CXX test/cpp_headers/file.o 00:03:15.940 CXX test/cpp_headers/ftl.o 00:03:15.940 CXX test/cpp_headers/gpt_spec.o 00:03:15.940 CXX test/cpp_headers/hexlify.o 00:03:15.940 CXX test/cpp_headers/idxd.o 00:03:15.940 CC app/spdk_dd/spdk_dd.o 00:03:15.940 CXX test/cpp_headers/histogram_data.o 00:03:15.940 CXX test/cpp_headers/idxd_spec.o 
00:03:15.940 CXX test/cpp_headers/ioat.o 00:03:15.940 CXX test/cpp_headers/init.o 00:03:15.940 CXX test/cpp_headers/iscsi_spec.o 00:03:15.940 CXX test/cpp_headers/ioat_spec.o 00:03:15.941 CXX test/cpp_headers/json.o 00:03:15.941 CXX test/cpp_headers/jsonrpc.o 00:03:15.941 CXX test/cpp_headers/keyring.o 00:03:15.941 CXX test/cpp_headers/likely.o 00:03:15.941 CXX test/cpp_headers/keyring_module.o 00:03:15.941 CXX test/cpp_headers/log.o 00:03:15.941 CC test/thread/lock/spdk_lock.o 00:03:15.941 CXX test/cpp_headers/lvol.o 00:03:15.941 CXX test/cpp_headers/memory.o 00:03:15.941 CXX test/cpp_headers/mmio.o 00:03:15.941 CXX test/cpp_headers/nbd.o 00:03:15.941 CXX test/cpp_headers/net.o 00:03:15.941 CXX test/cpp_headers/notify.o 00:03:15.941 CXX test/cpp_headers/nvme_intel.o 00:03:15.941 CXX test/cpp_headers/nvme.o 00:03:15.941 CXX test/cpp_headers/nvme_spec.o 00:03:15.941 CXX test/cpp_headers/nvme_ocssd.o 00:03:15.941 CXX test/cpp_headers/nvme_zns.o 00:03:15.941 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:15.941 CC test/env/pci/pci_ut.o 00:03:15.941 CC test/thread/poller_perf/poller_perf.o 00:03:15.941 CC test/env/vtophys/vtophys.o 00:03:15.941 CC app/spdk_tgt/spdk_tgt.o 00:03:15.941 CC test/env/memory/memory_ut.o 00:03:15.941 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:15.941 CC examples/util/zipf/zipf.o 00:03:15.941 CC examples/ioat/perf/perf.o 00:03:15.941 CC test/app/histogram_perf/histogram_perf.o 00:03:15.941 CC test/app/stub/stub.o 00:03:15.941 CC test/app/jsoncat/jsoncat.o 00:03:15.941 CXX test/cpp_headers/nvmf_cmd.o 00:03:15.941 CC examples/ioat/verify/verify.o 00:03:15.941 CC app/fio/nvme/fio_plugin.o 00:03:15.941 LINK spdk_lspci 00:03:15.941 CC test/dma/test_dma/test_dma.o 00:03:15.941 CC test/app/bdev_svc/bdev_svc.o 00:03:15.941 CC app/fio/bdev/fio_plugin.o 00:03:15.941 LINK rpc_client_test 00:03:15.941 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.941 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.941 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.941 LINK spdk_nvme_discover 00:03:15.941 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:15.941 LINK poller_perf 00:03:15.941 CXX test/cpp_headers/nvmf.o 00:03:16.200 CXX test/cpp_headers/nvmf_spec.o 00:03:16.200 CXX test/cpp_headers/nvmf_transport.o 00:03:16.200 CXX test/cpp_headers/opal.o 00:03:16.200 CXX test/cpp_headers/opal_spec.o 00:03:16.200 CXX test/cpp_headers/pci_ids.o 00:03:16.200 CXX test/cpp_headers/pipe.o 00:03:16.200 LINK jsoncat 00:03:16.200 CXX test/cpp_headers/queue.o 00:03:16.200 CXX test/cpp_headers/reduce.o 00:03:16.200 LINK vtophys 00:03:16.200 CXX test/cpp_headers/rpc.o 00:03:16.200 CXX test/cpp_headers/scheduler.o 00:03:16.200 CXX test/cpp_headers/scsi.o 00:03:16.200 CXX test/cpp_headers/scsi_spec.o 00:03:16.200 LINK interrupt_tgt 00:03:16.200 CXX test/cpp_headers/sock.o 00:03:16.200 CXX test/cpp_headers/stdinc.o 00:03:16.200 CXX test/cpp_headers/string.o 00:03:16.200 CXX test/cpp_headers/thread.o 00:03:16.200 CXX test/cpp_headers/trace.o 00:03:16.200 CXX test/cpp_headers/trace_parser.o 00:03:16.200 CXX test/cpp_headers/tree.o 00:03:16.200 CXX test/cpp_headers/ublk.o 00:03:16.200 CXX test/cpp_headers/util.o 00:03:16.200 CXX test/cpp_headers/uuid.o 00:03:16.200 CXX test/cpp_headers/version.o 00:03:16.200 LINK histogram_perf 00:03:16.200 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.200 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.200 CXX test/cpp_headers/vhost.o 00:03:16.200 CXX test/cpp_headers/vmd.o 00:03:16.200 CXX test/cpp_headers/xor.o 00:03:16.200 CXX test/cpp_headers/zipf.o 00:03:16.200 LINK 
env_dpdk_post_init 00:03:16.200 LINK spdk_trace_record 00:03:16.200 LINK nvmf_tgt 00:03:16.200 LINK zipf 00:03:16.200 LINK iscsi_tgt 00:03:16.201 LINK stub 00:03:16.201 LINK verify 00:03:16.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.201 LINK ioat_perf 00:03:16.201 LINK bdev_svc 00:03:16.201 LINK spdk_tgt 00:03:16.201 LINK spdk_trace 00:03:16.201 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:16.201 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:16.201 LINK pci_ut 00:03:16.459 LINK spdk_dd 00:03:16.459 LINK test_dma 00:03:16.459 LINK spdk_bdev 00:03:16.459 LINK mem_callbacks 00:03:16.459 LINK nvme_fuzz 00:03:16.459 LINK spdk_nvme_identify 00:03:16.459 LINK spdk_nvme 00:03:16.459 LINK llvm_vfio_fuzz 00:03:16.459 LINK spdk_nvme_perf 00:03:16.717 LINK vhost_fuzz 00:03:16.717 CC app/vhost/vhost.o 00:03:16.717 LINK spdk_top 00:03:16.717 LINK llvm_nvme_fuzz 00:03:16.717 CC examples/sock/hello_world/hello_sock.o 00:03:16.717 CC examples/idxd/perf/perf.o 00:03:16.717 CC examples/vmd/led/led.o 00:03:16.717 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.717 CC examples/thread/thread/thread_ex.o 00:03:16.975 LINK vhost 00:03:16.975 LINK led 00:03:16.975 LINK lsvmd 00:03:16.975 LINK memory_ut 00:03:16.975 LINK hello_sock 00:03:16.975 LINK idxd_perf 00:03:16.975 LINK thread 00:03:16.975 LINK spdk_lock 00:03:17.234 LINK iscsi_fuzz 00:03:17.799 CC examples/nvme/abort/abort.o 00:03:17.799 CC examples/nvme/reconnect/reconnect.o 00:03:17.799 CC examples/nvme/arbitration/arbitration.o 00:03:17.799 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.799 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:17.799 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:17.799 CC examples/nvme/hello_world/hello_world.o 00:03:17.799 CC examples/nvme/hotplug/hotplug.o 00:03:17.799 CC test/event/event_perf/event_perf.o 00:03:17.799 CC test/event/reactor_perf/reactor_perf.o 00:03:17.799 CC test/event/reactor/reactor.o 00:03:17.799 CC test/event/app_repeat/app_repeat.o 00:03:17.799 CC test/event/scheduler/scheduler.o 00:03:17.799 LINK pmr_persistence 00:03:17.799 LINK cmb_copy 00:03:17.799 LINK hotplug 00:03:17.799 LINK hello_world 00:03:17.799 LINK event_perf 00:03:17.799 LINK reactor 00:03:17.799 LINK reactor_perf 00:03:17.799 LINK app_repeat 00:03:17.799 LINK reconnect 00:03:18.057 LINK abort 00:03:18.057 LINK arbitration 00:03:18.057 LINK nvme_manage 00:03:18.057 LINK scheduler 00:03:18.057 CC test/nvme/overhead/overhead.o 00:03:18.057 CC test/nvme/aer/aer.o 00:03:18.057 CC test/nvme/startup/startup.o 00:03:18.057 CC test/nvme/sgl/sgl.o 00:03:18.057 CC test/nvme/reserve/reserve.o 00:03:18.057 CC test/nvme/cuse/cuse.o 00:03:18.057 CC test/nvme/boot_partition/boot_partition.o 00:03:18.057 CC test/nvme/compliance/nvme_compliance.o 00:03:18.057 CC test/nvme/e2edp/nvme_dp.o 00:03:18.057 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:18.057 CC test/nvme/err_injection/err_injection.o 00:03:18.057 CC test/nvme/fused_ordering/fused_ordering.o 00:03:18.057 CC test/nvme/fdp/fdp.o 00:03:18.057 CC test/nvme/reset/reset.o 00:03:18.057 CC test/nvme/simple_copy/simple_copy.o 00:03:18.057 CC test/nvme/connect_stress/connect_stress.o 00:03:18.057 CC test/blobfs/mkfs/mkfs.o 00:03:18.057 CC test/accel/dif/dif.o 00:03:18.315 CC test/lvol/esnap/esnap.o 00:03:18.315 LINK startup 00:03:18.315 LINK boot_partition 00:03:18.315 LINK doorbell_aers 00:03:18.315 LINK err_injection 00:03:18.315 LINK reserve 00:03:18.315 LINK fused_ordering 00:03:18.315 LINK connect_stress 
00:03:18.315 LINK mkfs 00:03:18.315 LINK simple_copy 00:03:18.315 LINK aer 00:03:18.315 LINK nvme_dp 00:03:18.315 LINK overhead 00:03:18.315 LINK sgl 00:03:18.315 LINK reset 00:03:18.315 LINK fdp 00:03:18.315 LINK nvme_compliance 00:03:18.574 LINK dif 00:03:18.831 CC examples/accel/perf/accel_perf.o 00:03:18.831 CC examples/blob/cli/blobcli.o 00:03:18.831 CC examples/blob/hello_world/hello_blob.o 00:03:19.089 LINK hello_blob 00:03:19.089 LINK cuse 00:03:19.089 LINK accel_perf 00:03:19.089 LINK blobcli 00:03:19.656 CC examples/bdev/hello_world/hello_bdev.o 00:03:19.656 CC examples/bdev/bdevperf/bdevperf.o 00:03:19.914 LINK hello_bdev 00:03:19.914 CC test/bdev/bdevio/bdevio.o 00:03:20.173 LINK bdevperf 00:03:20.173 LINK bdevio 00:03:21.548 CC examples/nvmf/nvmf/nvmf.o 00:03:21.549 LINK esnap 00:03:21.807 LINK nvmf 00:03:22.743 00:03:22.743 real 0m40.578s 00:03:22.743 user 6m3.435s 00:03:22.743 sys 2m2.027s 00:03:22.743 09:18:35 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:22.743 09:18:35 make -- common/autotest_common.sh@10 -- $ set +x 00:03:22.743 ************************************ 00:03:22.743 END TEST make 00:03:22.743 ************************************ 00:03:22.743 09:18:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.743 09:18:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.743 09:18:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.743 09:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.743 09:18:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.743 09:18:35 -- pm/common@44 -- $ pid=288204 00:03:22.743 09:18:35 -- pm/common@50 -- $ kill -TERM 288204 00:03:22.743 09:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.743 09:18:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.743 09:18:35 -- pm/common@44 -- $ pid=288206 00:03:22.743 09:18:35 -- pm/common@50 -- $ kill -TERM 288206 00:03:22.743 09:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.743 09:18:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:22.743 09:18:35 -- pm/common@44 -- $ pid=288208 00:03:22.743 09:18:35 -- pm/common@50 -- $ kill -TERM 288208 00:03:22.743 09:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.743 09:18:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:22.743 09:18:35 -- pm/common@44 -- $ pid=288227 00:03:22.743 09:18:35 -- pm/common@50 -- $ sudo -E kill -TERM 288227 00:03:23.001 09:18:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:23.001 09:18:35 -- nvmf/common.sh@7 -- # uname -s 00:03:23.001 09:18:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:23.001 09:18:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:23.001 09:18:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:23.001 09:18:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:23.002 09:18:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:23.002 09:18:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:23.002 09:18:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:23.002 09:18:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:23.002 09:18:35 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:23.002 09:18:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:23.002 09:18:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:03:23.002 09:18:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:03:23.002 09:18:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:23.002 09:18:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:23.002 09:18:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:23.002 09:18:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:23.002 09:18:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:23.002 09:18:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:23.002 09:18:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:23.002 09:18:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:23.002 09:18:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.002 09:18:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.002 09:18:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.002 09:18:35 -- paths/export.sh@5 -- # export PATH 00:03:23.002 09:18:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.002 09:18:35 -- nvmf/common.sh@47 -- # : 0 00:03:23.002 09:18:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:23.002 09:18:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:23.002 09:18:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:23.002 09:18:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:23.002 09:18:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:23.002 09:18:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:23.002 09:18:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:23.002 09:18:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:23.002 09:18:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:23.002 09:18:35 -- spdk/autotest.sh@32 -- # uname -s 00:03:23.002 09:18:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:23.002 09:18:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:23.002 09:18:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:23.002 09:18:35 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:23.002 09:18:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:23.002 09:18:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:23.002 09:18:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:23.002 09:18:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:23.002 09:18:35 -- spdk/autotest.sh@48 -- # udevadm_pid=348179 00:03:23.002 09:18:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:23.002 09:18:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:23.002 09:18:35 -- pm/common@17 -- # local monitor 00:03:23.002 09:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.002 09:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.002 09:18:35 -- pm/common@21 -- # date +%s 00:03:23.002 09:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.002 09:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.002 09:18:35 -- pm/common@21 -- # date +%s 00:03:23.002 09:18:35 -- pm/common@25 -- # sleep 1 00:03:23.002 09:18:35 -- pm/common@21 -- # date +%s 00:03:23.002 09:18:35 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721891915 00:03:23.002 09:18:35 -- pm/common@21 -- # date +%s 00:03:23.002 09:18:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721891915 00:03:23.002 09:18:35 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721891915 00:03:23.002 09:18:35 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721891915 00:03:23.002 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721891915_collect-cpu-temp.pm.log 00:03:23.002 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721891915_collect-vmstat.pm.log 00:03:23.002 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721891915_collect-cpu-load.pm.log 00:03:23.002 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721891915_collect-bmc-pm.bmc.pm.log 00:03:23.940 09:18:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.940 09:18:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.940 09:18:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:23.940 09:18:36 -- common/autotest_common.sh@10 -- # set +x 00:03:23.940 09:18:36 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.940 09:18:36 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:23.940 09:18:36 -- common/autotest_common.sh@10 -- # set +x 00:03:24.200 09:18:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:24.200 09:18:36 -- spdk/autotest.sh@61 -- 
# readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:24.200 09:18:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:24.200 09:18:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:24.200 09:18:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:24.200 09:18:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:24.200 09:18:36 -- common/autotest_common.sh@1455 -- # uname 00:03:24.200 09:18:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:24.200 09:18:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:24.200 09:18:36 -- common/autotest_common.sh@1475 -- # uname 00:03:24.200 09:18:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:24.200 09:18:36 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:24.200 09:18:36 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:24.200 09:18:36 -- spdk/autotest.sh@72 -- # hash lcov 00:03:24.200 09:18:36 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:24.200 09:18:36 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:24.200 09:18:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:24.200 09:18:36 -- common/autotest_common.sh@10 -- # set +x 00:03:24.200 09:18:36 -- spdk/autotest.sh@91 -- # rm -f 00:03:24.200 09:18:36 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.488 0000:dd:00.0 (8086 0a54): Already using the nvme driver 00:03:27.488 0000:df:00.0 (8086 0a54): Already using the nvme driver 00:03:27.747 0000:de:00.0 (8086 0953): Already using the nvme driver 00:03:27.747 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:27.747 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:28.007 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:28.007 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:28.007 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:28.007 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:28.007 0000:dc:00.0 (8086 0953): Already using the nvme driver 00:03:28.007 09:18:40 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:28.007 09:18:40 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:28.007 09:18:40 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:28.007 09:18:40 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:28.007 09:18:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:28.007 09:18:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:28.007 09:18:40 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:28.007 09:18:40 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.007 09:18:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:28.007 09:18:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:28.007 09:18:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:28.007 09:18:40 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:28.007 09:18:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:28.007 09:18:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:28.007 09:18:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:28.007 09:18:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:28.007 09:18:40 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:28.007 09:18:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:28.007 09:18:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:28.007 09:18:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:28.007 09:18:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:28.007 09:18:40 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:28.007 09:18:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:28.007 09:18:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:28.007 09:18:40 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:28.007 09:18:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.007 09:18:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:28.007 09:18:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:28.007 09:18:40 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:28.007 09:18:40 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:28.007 No valid GPT data, bailing 00:03:28.007 09:18:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.007 09:18:40 -- scripts/common.sh@391 -- # pt= 00:03:28.007 09:18:40 -- scripts/common.sh@392 -- # return 1 00:03:28.007 09:18:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:28.007 1+0 records in 00:03:28.007 1+0 records out 00:03:28.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527658 s, 199 MB/s 00:03:28.007 09:18:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.007 09:18:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:28.007 09:18:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:28.007 09:18:40 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:28.007 09:18:40 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:28.007 No valid GPT data, bailing 00:03:28.007 09:18:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:28.007 09:18:40 -- scripts/common.sh@391 -- # pt= 00:03:28.007 09:18:40 -- scripts/common.sh@392 -- # return 1 00:03:28.007 09:18:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:28.007 1+0 records in 00:03:28.007 1+0 records out 00:03:28.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432225 s, 243 MB/s 00:03:28.007 09:18:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.007 09:18:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:28.007 09:18:40 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme2n1 00:03:28.007 09:18:40 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:28.007 09:18:40 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:28.266 No valid GPT data, bailing 00:03:28.266 09:18:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:28.266 09:18:40 -- scripts/common.sh@391 -- # pt= 00:03:28.266 09:18:40 -- scripts/common.sh@392 -- # return 1 00:03:28.266 09:18:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:28.266 1+0 records in 00:03:28.266 1+0 records out 00:03:28.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422521 s, 248 MB/s 00:03:28.266 09:18:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.266 09:18:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:28.266 09:18:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:03:28.266 09:18:40 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:03:28.266 09:18:40 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:28.266 No valid GPT data, bailing 00:03:28.266 09:18:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:28.266 09:18:40 -- scripts/common.sh@391 -- # pt= 00:03:28.266 09:18:40 -- scripts/common.sh@392 -- # return 1 00:03:28.266 09:18:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:28.266 1+0 records in 00:03:28.266 1+0 records out 00:03:28.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412115 s, 254 MB/s 00:03:28.266 09:18:40 -- spdk/autotest.sh@118 -- # sync 00:03:28.266 09:18:40 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:28.266 09:18:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:28.266 09:18:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:33.539 09:18:45 -- spdk/autotest.sh@124 -- # uname -s 00:03:33.539 09:18:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:33.539 09:18:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:33.539 09:18:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.539 09:18:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.539 09:18:45 -- common/autotest_common.sh@10 -- # set +x 00:03:33.539 ************************************ 00:03:33.539 START TEST setup.sh 00:03:33.539 ************************************ 00:03:33.539 09:18:45 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:33.539 * Looking for test storage... 
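The pre_cleanup pass traced above boils down to: enumerate whole NVMe namespaces, leave zoned ones alone, and zero the first MiB of any namespace that no longer carries a valid partition table. A minimal standalone sketch of that flow, in bash, using plain blkid as the partition-table probe rather than the spdk-gpt.py helper the log actually calls (so the probe here is an approximation, not the SPDK tool):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the /dev/nvme*n!(*p*) glob seen in the trace

    block_in_use() {
        # Approximates the logged check: an empty PTTYPE from blkid matches
        # the "No valid GPT data, bailing" case above.
        [[ -n $(blkid -s PTTYPE -o value "$1" 2>/dev/null) ]]
    }

    for dev in /dev/nvme*n!(*p*); do
        # Zoned namespaces are excluded, as the get_zoned_devs pass above does
        # by reading /sys/block/<dev>/queue/zoned and expecting "none".
        [[ $(cat "/sys/block/${dev##*/}/queue/zoned" 2>/dev/null) == none ]] || continue
        block_in_use "$dev" && continue
        dd if=/dev/zero of="$dev" bs=1M count=1   # wipe, as in the dd lines above
    done
    sync

The 1+0 records in/out lines in the trace are simply dd reporting that single 1 MiB write on each clean namespace.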
00:03:33.539 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:33.539 09:18:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:33.539 09:18:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:33.539 09:18:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:33.539 09:18:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.539 09:18:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.539 09:18:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.539 ************************************ 00:03:33.539 START TEST acl 00:03:33.539 ************************************ 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:33.539 * Looking for test storage... 00:03:33.539 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:33.539 09:18:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:33.539 09:18:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.539 09:18:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:33.539 09:18:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a 
devs 00:03:33.539 09:18:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:33.539 09:18:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:33.539 09:18:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:33.539 09:18:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.539 09:18:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.734 09:18:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:37.734 09:18:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:37.734 09:18:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.734 09:18:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:37.734 09:18:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.734 09:18:49 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:40.272 Hugepages 00:03:40.272 node hugesize free / total 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.272 00:03:40.272 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.272 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:40.532 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]]
[... repeated xtrace elided: each remaining 0000:00:04.x and 0000:80:04.x ioatdma channel matches the BDF pattern, fails the nvme driver test, and is skipped with continue ...]
09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 09:18:53
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:dc:00.0 == *:*:*.* ]] 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\c\:\0\0\.\0* ]] 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:dd:00.0 == *:*:*.* ]] 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\d\:\0\0\.\0* ]] 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.533 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:de:00.0 == *:*:*.* ]] 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\e\:\0\0\.\0* ]] 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:df:00.0 == *:*:*.* ]] 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\f\:\0\0\.\0* ]] 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:40.791 09:18:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:40.791 09:18:53 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.791 09:18:53 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.791 09:18:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.791 ************************************ 00:03:40.791 START TEST denied 00:03:40.791 ************************************ 00:03:40.791 09:18:53 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:40.791 09:18:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:dc:00.0' 00:03:40.791 09:18:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:40.791 09:18:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:dc:00.0' 00:03:40.791 09:18:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.791 09:18:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:46.070 0000:dc:00.0 (8086 0953): Skipping denied controller at 0000:dc:00.0 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/acl.sh@40 -- # 
verify 0000:dc:00.0 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:dc:00.0 ]] 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:dc:00.0/driver 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.070 09:18:58 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.196 00:03:54.196 real 0m12.267s 00:03:54.196 user 0m2.488s 00:03:54.196 sys 0m4.569s 00:03:54.196 09:19:05 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.196 09:19:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:54.196 ************************************ 00:03:54.196 END TEST denied 00:03:54.196 ************************************ 00:03:54.196 09:19:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:54.196 09:19:05 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.196 09:19:05 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.196 09:19:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.196 ************************************ 00:03:54.196 START TEST allowed 00:03:54.196 ************************************ 00:03:54.196 09:19:05 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:54.196 09:19:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:dc:00.0 00:03:54.196 09:19:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:54.196 09:19:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:dc:00.0 .*: nvme -> .*' 00:03:54.196 09:19:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.196 09:19:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:59.470 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:dd:00.0 ]] 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:dd:00.0/driver 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:de:00.0 ]] 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:de:00.0/driver 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # 
driver=/sys/bus/pci/drivers/nvme 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:df:00.0 ]] 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:df:00.0/driver 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.470 09:19:11 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.042 00:04:06.042 real 0m11.804s 00:04:06.042 user 0m2.642s 00:04:06.042 sys 0m4.505s 00:04:06.042 09:19:17 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.042 09:19:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:06.042 ************************************ 00:04:06.042 END TEST allowed 00:04:06.042 ************************************ 00:04:06.042 00:04:06.042 real 0m31.857s 00:04:06.042 user 0m8.062s 00:04:06.042 sys 0m14.150s 00:04:06.042 09:19:17 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.042 09:19:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:06.042 ************************************ 00:04:06.042 END TEST acl 00:04:06.042 ************************************ 00:04:06.042 09:19:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:06.042 09:19:17 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.042 09:19:17 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.042 09:19:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.042 ************************************ 00:04:06.042 START TEST hugepages 00:04:06.042 ************************************ 00:04:06.042 09:19:17 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:06.042 * Looking for test storage... 
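Both ACL cases above hinge on one check: after scripts/setup.sh runs with PCI_BLOCKED or PCI_ALLOWED set, each controller's /sys driver symlink must point at the expected driver. A short sketch of that verify step in bash (the BDFs are this rig's, the setup.sh path is relative to an SPDK checkout as in the log, and the invocation lines are illustrative assumptions rather than the test harness itself):

    verify_driver() {
        # Resolve the bound driver the way the logged verify does:
        # readlink -f /sys/bus/pci/devices/<bdf>/driver, then compare basename.
        local bdf=$1 want=$2 link
        [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1
        link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
        [[ ${link##*/} == "$want" ]]
    }

    # denied: a blocked controller must keep its kernel nvme driver
    PCI_BLOCKED=' 0000:dc:00.0' ./scripts/setup.sh config
    verify_driver 0000:dc:00.0 nvme

    # allowed: only the allowed controller is rebound; the rest stay on nvme
    PCI_ALLOWED=0000:dc:00.0 ./scripts/setup.sh config
    for bdf in 0000:dd:00.0 0000:de:00.0 0000:df:00.0; do
        verify_driver "$bdf" nvme
    done
    ./scripts/setup.sh reset   # the traced tests reset between cases too

This matches the timings reported above: each case is dominated by the setup.sh rebind and reset, not by the symlink checks themselves.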
00:04:06.042 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 41970664 kB' 'MemAvailable: 45442532 kB' 'Buffers: 12144 kB' 'Cached: 10858804 kB' 'SwapCached: 0 kB' 'Active: 8292744 kB' 'Inactive: 3449392 kB' 'Active(anon): 7873192 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 874968 kB' 'Mapped: 154520 kB' 'Shmem: 7002004 kB' 'KReclaimable: 199720 kB' 'Slab: 598380 kB' 'SReclaimable: 199720 kB' 'SUnreclaim: 398660 kB' 'KernelStack: 19088 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36438936 kB' 'Committed_AS: 9720972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208008 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.042 09:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.042 09:19:17 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
[... repeated xtrace elided: every /proc/meminfo field before Hugepagesize is read, tested against Hugepagesize, and skipped with continue ...]
00:04:06.044 09:19:17 setup.sh.hugepages
setup/common.sh@31 -- # read -r var val _ 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.044 09:19:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:06.044 09:19:17 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.044 09:19:17 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.044 09:19:17 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.044 ************************************ 00:04:06.044 START TEST default_setup 00:04:06.044 ************************************ 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.044 09:19:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:08.582 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:08.582 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:08.582 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:08.582 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:08.583 
00:04:08.582 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:08.582 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:08.582 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:08.582 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:08.583 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:10.487 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci
00:04:10.487 0000:df:00.0 (8086 0a54): nvme -> vfio-pci
00:04:10.487 0000:de:00.0 (8086 0953): nvme -> vfio-pci
00:04:10.755 0000:dc:00.0 (8086 0953): nvme -> vfio-pci
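The driver transitions above are setup.sh detaching the ioatdma and nvme kernel drivers so the devices can be claimed by vfio-pci for userspace I/O. A sketch of the sysfs mechanics behind one such rebind (a generic illustration, not SPDK's setup.sh; the BDF is copied from the trace, and the script must run as root):

  #!/usr/bin/env bash
  # Rebind one PCI function from its current kernel driver to vfio-pci,
  # the state transition the lines above log.
  bdf=0000:dd:00.0
  dev=/sys/bus/pci/devices/$bdf

  modprobe vfio-pci
  # Detach the current driver (nvme/ioatdma in the trace), if any is bound.
  if [[ -e $dev/driver ]]; then
      echo "$bdf" > "$dev/driver/unbind"
  fi
  # Steer the next probe to vfio-pci and trigger it.
  echo vfio-pci > "$dev/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe
  basename "$(readlink -f "$dev/driver")"   # -> vfio-pci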
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44170780 kB' 'MemAvailable: 47641032 kB' 'Buffers: 12144 kB' 'Cached: 10858936 kB' 'SwapCached: 0 kB' 'Active: 8317044 kB' 'Inactive: 3449392 kB' 'Active(anon): 7897492 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 898240 kB' 'Mapped: 154932 kB' 'Shmem: 7002136 kB' 'KReclaimable: 196488 kB' 'Slab: 590888 kB' 'SReclaimable: 196488 kB' 'SUnreclaim: 394400 kB' 'KernelStack: 19568 kB' 'PageTables: 9628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9756956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208392 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:10.755 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same @32 field check / @32 continue / @31 IFS=': ' / @31 read -r var val _ trace repeats for every field from MemFree through HardwareCorrupted ...]
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
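The anon=0 above is get_meminfo returning the AnonHugePages value from the snapshot. The trace spells out the mechanism: read the meminfo source (a per-node /sys/devices/system/node/nodeN/meminfo when a node id is given, otherwise /proc/meminfo), strip any leading "Node N " prefix, split each line on ': ', and echo the value whose key matches. A self-contained approximation of that pattern (an illustration, not the setup/common.sh source):

  #!/usr/bin/env bash
  # Approximate the get_meminfo pattern traced above.
  shopt -s extglob

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo
      # Per-node counters live under sysfs when a node id is supplied.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo_sketch AnonHugePages   # prints 0 on the node captured above

Splitting with IFS=': ' leaves the unit column in the discarded third field, which is why the trace echoes a bare 0 rather than "0 kB".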
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44169072 kB' 'MemAvailable: 47639324 kB' 'Buffers: 12144 kB' 'Cached: 10858936 kB' 'SwapCached: 0 kB' 'Active: 8317704 kB' 'Inactive: 3449392 kB' 'Active(anon): 7898152 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 898920 kB' 'Mapped: 154932 kB' 'Shmem: 7002136 kB' 'KReclaimable: 196488 kB' 'Slab: 590912 kB' 'SReclaimable: 196488 kB' 'SUnreclaim: 394424 kB' 'KernelStack: 19584 kB' 'PageTables: 9908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9756976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208376 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:10.756 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same @32 field check / @32 continue / @31 IFS=': ' / @31 read -r var val _ trace repeats for every field from MemFree through HugePages_Rsvd ...]
00:04:10.757 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.757 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:10.757 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:10.757 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:10.757 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.757 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44167004 kB' 'MemAvailable: 47637256 kB' 'Buffers: 12144 kB' 'Cached: 10858956 kB' 'SwapCached: 0 kB' 'Active: 8317688 kB' 'Inactive: 3449392 kB' 'Active(anon): 7898136 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 899368 kB' 'Mapped: 154856 kB' 'Shmem: 7002156 kB' 'KReclaimable: 196488 kB' 'Slab: 591096 kB' 'SReclaimable: 196488 kB' 'SUnreclaim: 394608 kB' 'KernelStack: 19568 kB' 'PageTables: 9848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9756996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208344 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:11.021 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same @32 field check / @32 continue / @31 IFS=': ' / @31 read -r var val _ trace repeats for every field from MemFree through Unaccepted ...]
00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # read -r var val _ 00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.022 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.023 nr_hugepages=1024 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.023 resv_hugepages=0 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.023 surplus_hugepages=0 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.023 anon_hugepages=0 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44168880 kB' 'MemAvailable: 47639132 kB' 'Buffers: 12144 kB' 'Cached: 10858980 kB' 'SwapCached: 0 kB' 'Active: 8317224 kB' 'Inactive: 3449392 kB' 'Active(anon): 7897672 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 898832 kB' 'Mapped: 154856 kB' 'Shmem: 7002180 kB' 'KReclaimable: 196488 kB' 'Slab: 591096 kB' 'SReclaimable: 196488 kB' 
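[Editor's note] The get_meminfo helper being entered here can be sketched as follows. This is a minimal reconstruction from the xtrace entries above (mem_f, mapfile, the IFS=': ' read loop), not the verbatim SPDK setup/common.sh:

shopt -s extglob

get_meminfo() {
    # Print the value of one /proc/meminfo key; with a node id, read the
    # per-node copy under sysfs instead.
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node entries carry a "Node N " prefix; strip it so keys line up.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching key
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

# get_meminfo HugePages_Rsvd    -> 0   (system-wide, as echoed above)
# get_meminfo HugePages_Surp 0  -> surplus count for NUMA node 0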
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:11.023 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44168880 kB' 'MemAvailable: 47639132 kB' 'Buffers: 12144 kB' 'Cached: 10858980 kB' 'SwapCached: 0 kB' 'Active: 8317224 kB' 'Inactive: 3449392 kB' 'Active(anon): 7897672 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 898832 kB' 'Mapped: 154856 kB' 'Shmem: 7002180 kB' 'KReclaimable: 196488 kB' 'Slab: 591096 kB' 'SReclaimable: 196488 kB' 'SUnreclaim: 394608 kB' 'KernelStack: 19504 kB' 'PageTables: 9900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9757020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208424 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: setup/common.sh@31-32 scans every key from MemTotal through Unaccepted, continuing past each one, until HugePages_Total matches]
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
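[Editor's note] get_nodes, entered here, records each NUMA node's current hugepage count. A minimal sketch reconstructed from the trace; the trace only shows the already-expanded assignments (nodes_sys[0]=1024, nodes_sys[1]=0), so reading the count from each node's nr_hugepages file is an assumption:

get_nodes() {
    shopt -s extglob
    local node
    # One entry per NUMA node: how many 2048 kB hugepages it holds right now.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # fail if no NUMA topology is visible
}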
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:11.024 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:11.025 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32588596 kB' 'MemFree: 26537828 kB' 'MemUsed: 6050768 kB' 'SwapCached: 0 kB' 'Active: 2527144 kB' 'Inactive: 63960 kB' 'Active(anon): 2340812 kB' 'Inactive(anon): 0 kB' 'Active(file): 186332 kB' 'Inactive(file): 63960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1954616 kB' 'Mapped: 86620 kB' 'AnonPages: 639788 kB' 'Shmem: 1704324 kB' 'KernelStack: 10152 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89384 kB' 'Slab: 307600 kB' 'SReclaimable: 89384 kB' 'SUnreclaim: 218216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 scans node0's meminfo keys (MemTotal through HugePages_Free), continuing past each one, until HugePages_Surp matches]
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:11.026 node0=1024 expecting 1024
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:11.026
00:04:11.026 real	0m5.707s
00:04:11.026 user	0m1.464s
00:04:11.026 sys	0m2.228s
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:11.026 09:19:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:11.026 ************************************
00:04:11.026 END TEST default_setup
00:04:11.026 ************************************
00:04:11.026 09:19:23 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:11.026 09:19:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:11.026 09:19:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:11.026 09:19:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:11.026 ************************************
00:04:11.026 START TEST per_node_1G_alloc
00:04:11.026 ************************************
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
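[Editor's note] The size-to-page-count step traced at @55-@57 reduces to one division: 1048576 kB (1 GiB) over the 2048 kB default hugepage size gives the 512 pages requested per node; the per-node fan-out continues just below. A hedged sketch, with parameter handling simplified from the trace:

get_test_nr_hugepages() {
    local size=$1                      # requested hugepage memory per node, in kB
    shift
    local node_ids=("$@")              # e.g. (0 1), as traced
    local default_hugepages=2048       # kB, Hugepagesize from /proc/meminfo
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages # 512 pages on node 0 and on node 1
    done
}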
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:11.026 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:11.027 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:11.027 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.027 09:19:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:14.323 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:14.323 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:14.323 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:04:14.323 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:14.323 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
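[Editor's note] The allocation just performed is driven by SPDK's scripts/setup.sh through environment variables, as the @146 entries above show (values taken from this run):

NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh

With HUGENODE listing both NUMA nodes, 512 pages go to each, matching the nr_hugepages=1024 total that verify_nr_hugepages now starts to check.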
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.323 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44190748 kB' 'MemAvailable: 47661000 kB' 'Buffers: 12144 kB' 'Cached: 10859100 kB' 'SwapCached: 0 kB' 'Active: 8318636 kB' 'Inactive: 3449392 kB' 'Active(anon): 7899084 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 899544 kB' 'Mapped: 155024 kB' 'Shmem: 7002300 kB' 'KReclaimable: 196488 kB' 'Slab: 591136 kB' 'SReclaimable: 196488 kB' 'SUnreclaim: 394648 kB' 'KernelStack: 19280 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9754904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208264 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: setup/common.sh@31-32 begins scanning the keys (MemTotal, MemFree, MemAvailable, ...) toward AnonHugePages; this excerpt breaks off mid-scan and the trace continues]
Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 
09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.324 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.325 09:19:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44191148 kB' 'MemAvailable: 47661400 kB' 'Buffers: 12144 kB' 'Cached: 10859104 kB' 'SwapCached: 0 kB' 'Active: 8318376 kB' 'Inactive: 3449392 kB' 'Active(anon): 7898824 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 899828 kB' 'Mapped: 154948 kB' 'Shmem: 7002304 kB' 'KReclaimable: 196488 kB' 'Slab: 591104 kB' 'SReclaimable: 196488 kB' 'SUnreclaim: 394616 kB' 'KernelStack: 19280 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9754920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208232 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.325 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # 
local node= 00:04:14.326 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44191752 kB' 'MemAvailable: 47662004 kB' 'Buffers: 12144 kB' 'Cached: 10859124 kB' 'SwapCached: 0 kB' 'Active: 8318236 kB' 'Inactive: 3449392 kB' 'Active(anon): 7898684 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 899640 kB' 'Mapped: 154948 kB' 'Shmem: 7002324 kB' 'KReclaimable: 196488 kB' 'Slab: 591104 kB' 'SReclaimable: 196488 kB' 'SUnreclaim: 394616 kB' 'KernelStack: 19264 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9754944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208232 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.327 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.328 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.328 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.328 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.328 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.328 09:19:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace collapsed: IFS=': ' read loop tested each remaining /proc/meminfo field (AnonPages .. HugePages_Free) against HugePages_Rsvd and continued]
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:14.329 nr_hugepages=1024
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:14.329 resv_hugepages=0
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:14.329 surplus_hugepages=0
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:14.329 anon_hugepages=0
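The scan just traced is the whole of get_meminfo's control flow: two trace lines per field, repeated for every line of the meminfo file. Reduced to code, the pattern is roughly the following. This is a minimal sketch reconstructed from the traced commands (get_meminfo, mem_f, mem, and the @-line references appear in the trace itself); the exact body of the SPDK helper is an assumption.

    shopt -s extglob  # needed for the "Node +([0-9]) " strip below

    # Sketch of setup/common.sh's get_meminfo as reconstructed from the trace.
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # A node argument switches to that node's own meminfo file (common.sh@23-24).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; strip it exactly
        # as the traced mem=("${mem[@]#Node +([0-9]) }") does (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # common.sh@31
            # First field matching $get wins; the "kB" suffix lands in $_.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }  # common.sh@32-33
        done
        return 1
    }

Under this sketch, get_meminfo HugePages_Rsvd prints 0 for the system above, and get_meminfo HugePages_Surp 0 would read node0's file instead, which is exactly the call the trace makes further down.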
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [trace collapsed: get_meminfo locals set up; node unset, so mem_f=/proc/meminfo; mapfile -t mem]
00:04:14.329 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44190996 kB' 'MemAvailable: 47661248 kB' 'Buffers: 12144 kB' 'Cached: 10859164 kB' 'SwapCached: 0 kB' 'Active: 8318432 kB' 'Inactive: 3449392 kB' 'Active(anon): 7898880 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 899788 kB' 'Mapped: 154948 kB' 'Shmem: 7002364 kB' 'KReclaimable: 196488 kB' 'Slab: 591104 kB' 'SReclaimable: 196488 kB' 'SUnreclaim: 394616 kB' 'KernelStack: 19280 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9754964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208232 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:14.330 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace collapsed: read loop tested each field (MemTotal .. Unaccepted) against HugePages_Total and continued]
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
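Two details in the lookup above are worth pulling out. First, the dump is internally consistent: HugePages_Total 1024 at Hugepagesize 2048 kB gives 1024 * 2048 kB = 2097152 kB, exactly the Hugetlb figure reported. Second, once the (( 1024 == nr_hugepages + surp + resv )) check passes, get_nodes records each node's page count. The trace only shows the evaluated assignments (512 per node), so the sysfs counter read in this sketch is an assumption:

    shopt -s extglob nullglob

    # Sketch of get_nodes as traced above (setup/hugepages.sh@27-33). The
    # hugepages-2048kB/nr_hugepages path is an assumption; the trace shows
    # only the evaluated results, nodes_sys[0]=512 and nodes_sys[1]=512.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))  # 2 nodes here; 2 * 512 == 1024 == nr_hugepages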
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [trace collapsed: get_meminfo locals set up; node=0, so mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; "Node 0 " prefixes stripped]
00:04:14.331 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32588596 kB' 'MemFree: 27603604 kB' 'MemUsed: 4984992 kB' 'SwapCached: 0 kB' 'Active: 2528360 kB' 'Inactive: 63960 kB' 'Active(anon): 2342028 kB' 'Inactive(anon): 0 kB' 'Active(file): 186332 kB' 'Inactive(file): 63960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1954668 kB' 'Mapped: 86712 kB' 'AnonPages: 640836 kB' 'Shmem: 1704376 kB' 'KernelStack: 10232 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89384 kB' 'Slab: 308148 kB' 'SReclaimable: 89384 kB' 'SUnreclaim: 218764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace collapsed: read loop tested each node0 field (MemTotal .. HugePages_Free) against HugePages_Surp and continued]
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [trace collapsed: get_meminfo locals set up; node=1, so mem_f=/sys/devices/system/node/node1/meminfo; mapfile -t mem; "Node 1 " prefixes stripped]
00:04:14.332 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27706372 kB' 'MemFree: 16586636 kB' 'MemUsed: 11119736 kB' 'SwapCached: 0 kB' 'Active: 5790100 kB' 'Inactive: 3385432 kB' 'Active(anon): 5556880 kB' 'Inactive(anon): 0 kB' 'Active(file): 233220 kB' 'Inactive(file): 3385432 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8916664 kB' 'Mapped: 68236 kB' 'AnonPages: 258996 kB' 'Shmem: 5298012 kB' 'KernelStack: 9048 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107104 kB' 'Slab: 282956 kB' 'SReclaimable: 107104 kB' 'SUnreclaim: 175852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:14.333 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace collapsed: read loop testing each node1 field (MemTotal ..) against HugePages_Surp; the excerpt ends mid-loop]
00:04:14.334 09:19:26
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:14.334 node0=512 expecting 512 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:14.334 node1=512 expecting 512 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:14.334 00:04:14.334 real 0m3.146s 00:04:14.334 user 0m1.171s 00:04:14.334 sys 0m2.002s 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.334 09:19:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.334 ************************************ 00:04:14.334 END TEST per_node_1G_alloc 00:04:14.334 ************************************ 00:04:14.334 09:19:26 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:14.334 09:19:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.334 09:19:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.334 09:19:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.334 ************************************ 00:04:14.334 START TEST even_2G_alloc 00:04:14.334 ************************************ 00:04:14.334 09:19:26 
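[Editor's note: the node0/node1 lines above are the per-node verification output of per_node_1G_alloc. A minimal sketch of that check, assuming the standard kernel sysfs layout for 2 MiB hugepages -- an illustration, not the setup/hugepages.sh source:]

    #!/usr/bin/env bash
    # Compare each NUMA node's allocated 2 MiB hugepages against the
    # expected even split (512 per node in this run; illustrative value).
    expected=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*/node}
        actual=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node}=${actual} expecting ${expected}"
    done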
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.334 09:19:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
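[Editor's note: the get_test_nr_hugepages trace above reduces to one division plus an even split across nodes. A sketch of that arithmetic, assuming the size argument is in kB (2097152 kB = 2 GiB, matching the test name); variable names are illustrative, not the hugepages.sh source:]

    #!/usr/bin/env bash
    size_kb=2097152                            # requested pool: 2 GiB in kB
    hugepage_kb=2048                           # default 2 MiB hugepage, in kB
    nr_hugepages=$((size_kb / hugepage_kb))    # 1024 pages total
    no_nodes=2
    declare -a nodes_test
    for ((node = no_nodes - 1; node >= 0; node--)); do
        nodes_test[node]=$((nr_hugepages / no_nodes))   # 512 per node
    done
    echo "nr_hugepages=$nr_hugepages per_node=${nodes_test[*]}"   # 1024, 512 512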
00:04:17.633 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:17.633 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:17.633 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:04:17.633 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:17.633 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
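[Editor's note: the hugepages.sh@96 guard above compares the current transparent-hugepage mode ("always [madvise] never") against the pattern *\[\n\e\v\e\r\]*, i.e. AnonHugePages is only worth sampling when THP is not pinned to [never]. A self-contained sketch of the same decision, assuming the standard sysfs path:]

    #!/usr/bin/env bash
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP can be active, so anonymous hugepage usage is worth measuring
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
    else
        anon=0
    fi
    echo "anon=${anon}"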
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.633 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.634 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.634 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.634 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.634 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.634 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.634 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44213024 kB' 'MemAvailable: 47683272 kB' 'Buffers: 12144 kB' 'Cached: 10859272 kB' 'SwapCached: 0 kB' 'Active: 8313584 kB' 'Inactive: 3449392 kB' 'Active(anon): 7894032 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 894824 kB' 'Mapped: 154292 kB' 'Shmem: 7002472 kB' 'KReclaimable: 196480 kB' 'Slab: 590460 kB' 'SReclaimable: 196480 kB' 'SUnreclaim: 393980 kB' 'KernelStack: 19168 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9732176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208120 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:17.634-00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the scan repeats IFS=': '; read -r var val _; [[ $var == AnonHugePages ]] || continue over every /proc/meminfo field printed above until the key matches]
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
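[Editor's note: the common.sh@17-33 entries, repeated verbatim for every query in this log, are one helper. Reconstructed from the trace as a self-contained sketch -- the real common.sh may differ in detail:]

    #!/usr/bin/env bash
    shopt -s extglob   # required by the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node queries read that node's own meminfo file instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node lines
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Free     # global query, e.g. 1024 in this run
    get_meminfo HugePages_Free 0   # same counter restricted to node0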
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.635 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44213404 kB' 'MemAvailable: 47683652 kB' 'Buffers: 12144 kB' 'Cached: 10859276 kB' 'SwapCached: 0 kB' 'Active: 8313112 kB' 'Inactive: 3449392 kB' 'Active(anon): 7893560 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 894340 kB' 'Mapped: 153932 kB' 'Shmem: 7002476 kB' 'KReclaimable: 196480 kB' 'Slab: 590480 kB' 'SReclaimable: 196480 kB' 'SUnreclaim: 394000 kB' 'KernelStack: 19136 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9732192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208088 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:17.635-00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: key-scan loop over all the meminfo fields printed above, continuing until HugePages_Surp matches]
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
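[Editor's note: verify_nr_hugepages samples the hugepage counters one by one (anon above, surp here, resv next). Using the get_meminfo sketch above, this is how the four global counters read together; the values in the comments are the ones this run's meminfo dumps report:]

    total=$(get_meminfo HugePages_Total)   # 1024: the configured pool
    free=$(get_meminfo HugePages_Free)     # 1024: nothing has faulted pages in yet
    resv=$(get_meminfo HugePages_Rsvd)     # 0: reserved by mappings but not yet faulted
    surp=$(get_meminfo HugePages_Surp)     # 0: overcommit pages beyond the configured pool
    echo "total=$total free=$free resv=$resv surp=$surp"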
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.637 09:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44213404 kB' 'MemAvailable: 47683652 kB' 'Buffers: 12144 kB' 'Cached: 10859296 kB' 'SwapCached: 0 kB' 'Active: 8313124 kB' 'Inactive: 3449392 kB' 'Active(anon): 7893572 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 894336 kB' 'Mapped: 153932 kB' 'Shmem: 7002496 kB' 'KReclaimable: 196480 kB' 'Slab: 590480 kB' 'SReclaimable: 196480 kB' 'SUnreclaim: 394000 kB' 'KernelStack: 19136 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9732212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208088 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[setup/common.sh@31-32 xtrace elided: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and continues]
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:17.639 nr_hugepages=1024
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:17.639 resv_hugepages=0
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:17.639 surplus_hugepages=0
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:17.639 anon_hugepages=0
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
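The four values just echoed are the point of this step. A compact sketch of what setup/hugepages.sh@99-110 checks, reconstructed from the trace (the literal 1024s above are expanded command substitutions, presumably of HugePages_Free and HugePages_Total; get_meminfo is the sketch shown earlier):

nr_hugepages=1024                       # requested 2 MB pages (2 GiB total)
surp=$(get_meminfo HugePages_Surp)      # 0: nothing allocated beyond the pool
resv=$(get_meminfo HugePages_Rsvd)      # 0: nothing reserved but not yet faulted

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$(get_meminfo AnonHugePages)"

# Free and total pages must both account exactly for the request plus slack.
(( $(get_meminfo HugePages_Free)  == nr_hugepages + surp + resv ))
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))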
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.639 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44217752 kB' 'MemAvailable: 47688000 kB' 'Buffers: 12144 kB' 'Cached: 10859316 kB' 'SwapCached: 0 kB' 'Active: 8314152 kB' 'Inactive: 3449392 kB' 'Active(anon): 7894600 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 895728 kB' 'Mapped: 153932 kB' 'Shmem: 7002516 kB' 'KReclaimable: 196480 kB' 'Slab: 590480 kB' 'SReclaimable: 196480 kB' 'SUnreclaim: 394000 kB' 'KernelStack: 19120 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9732236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208088 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[setup/common.sh@31-32 xtrace elided: every key from MemTotal through Unaccepted fails the HugePages_Total match and continues]
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
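get_nodes has just recorded 512 pages on each of the two NUMA nodes. A sketch of that discovery plus the per-node surplus check that follows, reconstructed from the trace (the hugepages-2048kB path and the nr_hugepages read are assumptions inferred from the expanded 512 values above; get_meminfo is the earlier sketch):

shopt -s extglob

# Enumerate NUMA nodes and record how many 2 MB pages each one holds.
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} turns ".../node0" into the bare index "0"
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1

# An even 2G allocation over 2 nodes should leave 512 pages on each,
# none of them surplus.
for node in "${!nodes_sys[@]}"; do
    surp=$(get_meminfo HugePages_Surp "$node")
    echo "node$node: ${nodes_sys[$node]} pages, surplus=$surp"
done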
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.641 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32588596 kB' 'MemFree: 27633956 kB' 'MemUsed: 4954640 kB' 'SwapCached: 0 kB' 'Active: 2529180 kB' 'Inactive: 63960 kB' 'Active(anon): 2342848 kB' 'Inactive(anon): 0 kB' 'Active(file): 186332 kB' 'Inactive(file): 63960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1954696 kB' 'Mapped: 86140 kB' 'AnonPages: 641888 kB' 'Shmem: 1704404 kB' 'KernelStack: 10216 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89376 kB' 'Slab: 307672 kB' 'SReclaimable: 89376 kB' 'SUnreclaim: 218296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace elided: every node0 key from MemTotal through HugePages_Free fails the HugePages_Surp match and continues]
00:04:17.642 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.642 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.642 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.642 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27706372 kB' 'MemFree: 16583796 kB' 'MemUsed: 11122576 kB' 'SwapCached: 0 kB' 'Active: 5786080 kB' 'Inactive: 3385432 kB' 'Active(anon): 5552860 kB' 'Inactive(anon): 0 kB' 'Active(file): 233220 kB' 'Inactive(file): 3385432 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8916764 kB' 'Mapped: 67792 kB' 'AnonPages: 255048 kB' 'Shmem: 5298112 kB' 'KernelStack: 8888 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107104 kB' 'Slab: 282808 kB' 'SReclaimable: 107104 kB' 'SUnreclaim: 175704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512'
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.643 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.644 
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:17.644
00:04:17.644 real 0m3.160s
00:04:17.644 user 0m1.208s
00:04:17.644 sys 0m1.932s
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:17.644 09:19:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:17.644 ************************************
00:04:17.644 END TEST even_2G_alloc
00:04:17.644 ************************************
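[Editor's note: the even_2G_alloc trace above is dominated by one helper, get_meminfo from setup/common.sh. Below is a minimal sketch reconstructed from the @17-@33 xtrace lines; the redirection into mapfile and the loop form are inferred rather than shown verbatim in the log, so treat it as an illustration, not the script itself.]

    shopt -s extglob   # the +([0-9]) pattern below needs extended globs
    get_meminfo() {
        local get=$1    # field to report, e.g. HugePages_Surp
        local node=$2   # optional NUMA node; empty means system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # prefer the per-node meminfo file when it exists (common.sh@23-@24)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node lines are prefixed "Node N "; strip that (common.sh@29)
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long continue chains in the trace
            echo "$val"   # e.g. 'echo 0' for HugePages_Surp in this run
            return 0
        done
        return 1
    }

[The calls get_meminfo HugePages_Surp 0 and get_meminfo HugePages_Surp 1 above each print 0, which is the 'echo 0 / return 0' pair the trace keeps reaching.]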
00:04:17.644 09:19:30 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:17.644 09:19:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:17.644 09:19:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:17.644 09:19:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:17.644 ************************************
00:04:17.644 START TEST odd_alloc
00:04:17.644 ************************************
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
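[Editor's note: worked numbers for the get_test_nr_hugepages trace just shown. The rounding and the per-node split are inferred from the traced values (size=2098176, nr_hugepages=1025, node assignments 513/512), not copied from hugepages.sh.]

    size=2098176        # kB; HUGEMEM=2049 MB * 1024
    hugepagesize=2048   # kB; from 'Hugepagesize: 2048 kB'
    nodes=2
    nr_hugepages=$(( (size + hugepagesize - 1) / hugepagesize ))   # ceil -> 1025, deliberately odd
    base=$(( nr_hugepages / nodes ))   # 512
    rem=$(( nr_hugepages % nodes ))    # 1 page left over
    echo "node0=$(( base + rem )) node1=$base"   # node0=513 node1=512, as traced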
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:17.644 09:19:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:20.938 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:20.938 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:20.938 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:04:20.938 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:20.938 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.938 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44201220 kB' 'MemAvailable: 47671440 kB' 'Buffers: 12144 kB' 'Cached: 10859440 kB' 'SwapCached: 0 kB' 'Active: 8313624 kB' 'Inactive: 3449392 kB' 'Active(anon): 7894072 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 893768 kB' 'Mapped: 154044 kB' 'Shmem: 7002640 kB' 'KReclaimable: 196424 kB' 'Slab: 590580 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394156 kB' 'KernelStack: 19168 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9732732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208232 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[... setup/common.sh@31-@32 xtrace condensed: the read loop skips each field (MemTotal through HardwareCorrupted) because it is not AnonHugePages ...]
00:04:20.939 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:20.939 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.939 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:20.939 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
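[Editor's note: the @96-@97 lines above gate on transparent hugepages before counting AnonHugePages. A sketch of that check follows; reading the mode string from the standard sysfs control file is an assumption, since the trace only shows the already-expanded value "always [madvise] never".]

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    anon=0
    if [[ $thp != *\[never\]* ]]; then       # hugepages.sh@96
        anon=$(get_meminfo AnonHugePages)    # 0 kB in this run, hence 'anon=0'
    fi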
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.940 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44202492 kB' 'MemAvailable: 47672712 kB' 'Buffers: 12144 kB' 'Cached: 10859444 kB' 'SwapCached: 0 kB' 'Active: 8313340 kB' 'Inactive: 3449392 kB' 'Active(anon): 7893788 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 893964 kB' 'Mapped: 154044 kB' 'Shmem: 7002644 kB' 'KReclaimable: 196424 kB' 'Slab: 590572 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394148 kB' 'KernelStack: 19136 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9732748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208184 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[... setup/common.sh@31-@32 xtrace condensed: the read loop skips each field while scanning for HugePages_Surp; the trace continues beyond this excerpt ...]
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44203372 kB' 'MemAvailable: 47673592 kB' 'Buffers: 12144 kB' 'Cached: 10859460 kB' 'SwapCached: 0 kB' 'Active: 8312880 kB' 'Inactive: 3449392 kB' 'Active(anon): 7893328 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 893944 kB' 'Mapped: 153968 kB' 'Shmem: 7002660 kB' 'KReclaimable: 196424 kB' 'Slab: 590548 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394124 kB' 'KernelStack: 19136 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9732768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208184 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- 
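The trace above is the get_meminfo helper from setup/common.sh scanning a meminfo snapshot one key/value pair at a time until it reaches the requested key, then echoing that key's value. A minimal sketch of the same pattern, assuming plain bash and the paths visible in this log; the loop-over-array form and the `return 1` fallback are simplifications, not the upstream source:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced in this log (setup/common.sh).
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem line
        # A node argument redirects the read to the node-local meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node-local lines carry a "Node N " prefix; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Surp: 0" into var=HugePages_Surp, val=0.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1   # key not found (fallback added for this sketch)
    }

    get_meminfo HugePages_Surp      # prints 0 on the box in this log
    get_meminfo HugePages_Surp 0    # node-local variant for NUMA node 0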
00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:20.941 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44203372 kB' 'MemAvailable: 47673592 kB' 'Buffers: 12144 kB' 'Cached: 10859460 kB' 'SwapCached: 0 kB' 'Active: 8312880 kB' 'Inactive: 3449392 kB' 'Active(anon): 7893328 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 893944 kB' 'Mapped: 153968 kB' 'Shmem: 7002660 kB' 'KReclaimable: 196424 kB' 'Slab: 590548 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394124 kB' 'KernelStack: 19136 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9732768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208184 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:20.942 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # per-key scan for HugePages_Rsvd: MemTotal through HugePages_Free all fail the match and hit `continue` (identical iterations elided)
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
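With surp and resv in hand, hugepages.sh gates the odd_alloc test on simple arithmetic: the 1025 pages the test requested must equal nr_hugepages plus the surplus and reserved counts read back from the kernel, and the kernel-reported total (fetched next) must add up too. A hedged sketch of that gate; the echo lines and (( ... )) checks mirror the trace, but the exit-on-failure handling is an illustration added here:

    # Sketch of the accounting gate traced around setup/hugepages.sh@99-110.
    nr_hugepages=1025                       # the odd page count this test requested
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    # Requested pages must be fully backed: no hidden surplus or reservation,
    # and the kernel-reported pool must match what was asked for.
    (( nr_hugepages == 1025 )) || exit 1
    total=$(get_meminfo HugePages_Total)    # 1025 in this run
    (( total == nr_hugepages + surp + resv )) || exit 1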
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44203372 kB' 'MemAvailable: 47673592 kB' 'Buffers: 12144 kB' 'Cached: 10859460 kB' 'SwapCached: 0 kB' 'Active: 8312880 kB' 'Inactive: 3449392 kB' 'Active(anon): 7893328 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 893944 kB' 'Mapped: 153968 kB' 'Shmem: 7002660 kB' 'KReclaimable: 196424 kB' 'Slab: 590548 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394124 kB' 'KernelStack: 19136 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9732788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208184 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:20.944 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # per-key scan for HugePages_Total: MemTotal through Unaccepted all fail the match and hit `continue` (identical iterations elided)
_ 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32588596 kB' 'MemFree: 27627940 kB' 'MemUsed: 4960656 kB' 'SwapCached: 0 kB' 'Active: 2527220 kB' 'Inactive: 63960 kB' 'Active(anon): 2340888 kB' 'Inactive(anon): 0 kB' 'Active(file): 186332 kB' 'Inactive(file): 63960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1954744 kB' 'Mapped: 86184 kB' 'AnonPages: 639656 kB' 'Shmem: 1704452 kB' 'KernelStack: 10248 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89320 kB' 'Slab: 307716 kB' 'SReclaimable: 89320 kB' 'SUnreclaim: 218396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
00:04:20.946 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the HugePages_Surp lookup walks every node0 meminfo field above (MemTotal through HugePages_Free), hitting "continue" on each non-matching field]
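Put together, the scan above and the echoed result just below correspond to a field lookup like the sketch that follows: read the (possibly per-node) meminfo, strip any "Node N " prefix, then walk it line by line until the requested field matches and echo its value. This is a hedged reconstruction from the xtrace, not SPDK's setup/common.sh verbatim:

    #!/usr/bin/env bash
    shopt -s extglob    # the prefix-strip pattern below uses +([0-9])

    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # Prefer the per-node meminfo when a node index was given and exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strip "Node N " prefix if present
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # echoes "0" on the node traced above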
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.947 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27706372 kB' 'MemFree: 16576224 kB' 'MemUsed: 11130148 kB' 'SwapCached: 0 kB' 'Active: 5785868 kB' 'Inactive: 3385432 kB' 'Active(anon): 5552648 kB' 'Inactive(anon): 0 kB' 'Active(file): 233220 kB' 'Inactive(file): 3385432 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8916864 kB' 'Mapped: 67784 kB' 'AnonPages: 254488 kB' 'Shmem: 5298212 kB' 'KernelStack: 8904 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107104 kB' 'Slab: 282832 kB' 'SReclaimable: 107104 kB' 'SUnreclaim: 175728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the same field-by-field scan repeats over the node1 meminfo above until HugePages_Surp matches]
00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
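Both per-node surplus lookups return 0, so the per-node totals stay at what the kernel actually allocated. The point of odd_alloc is that 1025 pages cannot split evenly over two nodes; a one-shot sketch of the only possible split (variable names are illustrative, not from the script):

    #!/usr/bin/env bash
    # 1025 pages over 2 nodes: integer division gives the base share,
    # the remainder page must land on exactly one node (512 + 513).
    nr_hugepages=1025 no_nodes=2
    per_node=$(( nr_hugepages / no_nodes ))    # 512
    remainder=$(( nr_hugepages % no_nodes ))   # 1
    echo "expected split: $per_node and $(( per_node + remainder ))"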
in "${!nodes_test[@]}" 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:20.949 node0=512 expecting 513 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:20.949 node1=513 expecting 512 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:20.949 00:04:20.949 real 0m3.258s 00:04:20.949 user 0m1.244s 00:04:20.949 sys 0m2.100s 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.949 09:19:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:20.949 ************************************ 00:04:20.949 END TEST odd_alloc 00:04:20.949 ************************************ 00:04:20.949 09:19:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:20.949 09:19:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.949 09:19:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.949 09:19:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.949 ************************************ 00:04:20.949 START TEST custom_alloc 00:04:20.949 ************************************ 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:20.949 09:19:33 
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
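With nodes_hp[0]=512 and nodes_hp[1]=1024 registered, the loop that follows assembles the HUGENODE string one "nodes_hp[N]=count" term per node, joined by the function-local IFS=, seen at the top of custom_alloc. A sketch mirroring the traced expansion:

    #!/usr/bin/env bash
    declare -a HUGENODE=()
    nodes_hp=([0]=512 [1]=1024)
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # one term per node
    done
    # "${HUGENODE[*]}" joins on the first IFS character, yielding the traced value.
    (IFS=,; echo "${HUGENODE[*]}")   # nodes_hp[0]=512,nodes_hp[1]=1024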
"${!nodes_hp[@]}" 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.949 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.950 09:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:24.246 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.246 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.246 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:04:24.246 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:24.246 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 
00:04:24.246 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:24.246 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:24.246 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:24.246 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:24.246 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:24.246 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:24.246 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:04:24.246 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:24.246 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:24.246 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:24.246 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:24.246 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:24.246 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:24.246 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:24.246 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 43143492 kB' 'MemAvailable: 46613712 kB' 'Buffers: 12144 kB' 'Cached: 10859608 kB' 'SwapCached: 0 kB' 'Active: 8314236 kB' 'Inactive: 3449392 kB' 'Active(anon): 7894684 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 894572 kB' 'Mapped: 154040 kB' 'Shmem: 7002808 kB' 'KReclaimable: 196424 kB' 'Slab: 590644 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394220 kB' 'KernelStack: 19120 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963224 kB' 'Committed_AS: 9733456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208088 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
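Before counting anonymous hugepages, the trace above gates on [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], i.e. it only bothers reading AnonHugePages when transparent hugepages are not pinned to [never]. A sketch of that check against the standard THP sysfs file (the path is the kernel's, the variable name is illustrative):

    #!/usr/bin/env bash
    # The sysfs file reads e.g. "always [madvise] never"; the brackets mark
    # the active mode, so the test proceeds unless "[never]" is selected.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        echo "THP not disabled ($thp); AnonHugePages is worth checking"
    fi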
'VmallocUsed: 208088 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.247 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
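For readers skimming the trace: each get_meminfo call replays the same scan of /proc/meminfo, testing every key until the requested one matches, echoing its value, and returning. A minimal sketch of the helper, reconstructed from the setup/common.sh@16-33 records above rather than from the SPDK source itself (the @NN comments map lines back to the trace; extglob is assumed to be enabled for the Node-prefix strip):

    get_meminfo() {                              # e.g. get_meminfo AnonHugePages
        local get=$1 node=${2:-}                 # @17-18: key to look up, optional NUMA node
        local var val
        local mem_f=/proc/meminfo mem            # @20-22: default to the system-wide meminfo
        # @23-25: per-node lookups read that node's own meminfo instead
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                # @28: one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")         # @29: drop "Node N" prefixes (extglob pattern)
        while IFS=': ' read -r var val _; do     # @31: split "Key:  value kB" into var/val
            [[ $var == "$get" ]] || continue     # @32: skip every non-matching key
            echo "$val" && return 0              # @33: print the bare number (kB or page count)
        done < <(printf '%s\n' "${mem[@]}")      # @16: replay the captured lines into the loop
    }

Callers capture the echoed value, which is how hugepages.sh@97 arrives at anon=0 above (anon=$(get_meminfo AnonHugePages)).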
00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 43146640 kB' 'MemAvailable: 46616860 kB' 'Buffers: 12144 kB' 'Cached: 10859612 kB' 'SwapCached: 0 kB' 'Active: 8314484 kB' 'Inactive: 3449392 kB' 'Active(anon): 7894932 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 894928 kB' 'Mapped: 154116 kB' 'Shmem: 7002812 kB' 'KReclaimable: 196424 kB' 'Slab: 590724 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394300 kB' 'KernelStack: 19104 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963224 kB' 'Committed_AS: 9733472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208072 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.248 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.249 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 
09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 43147428 kB' 'MemAvailable: 46617648 kB' 'Buffers: 12144 kB' 'Cached: 10859612 kB' 'SwapCached: 0 kB' 'Active: 8313464 kB' 'Inactive: 3449392 kB' 'Active(anon): 7893912 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 
'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 894376 kB' 'Mapped: 154024 kB' 'Shmem: 7002812 kB' 'KReclaimable: 196424 kB' 'Slab: 590684 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394260 kB' 'KernelStack: 19136 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963224 kB' 'Committed_AS: 9733492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208072 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.250 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.251 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
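The records below close out the HugePages_Rsvd lookup (resv=0) and then echo the run's hugepage bookkeeping: nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0. Outside of this harness the same counters can be read directly from /proc/meminfo; a hypothetical one-liner, assuming the layout shown in the printf records above (column spacing varies by kernel):

    $ grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    HugePages_Total:    1536
    HugePages_Free:     1536
    HugePages_Rsvd:        0
    HugePages_Surp:        0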
00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:24.252 nr_hugepages=1536 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.252 resv_hugepages=0 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.252 surplus_hugepages=0 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.252 anon_hugepages=0 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 43148112 kB' 'MemAvailable: 46618332 kB' 'Buffers: 12144 kB' 'Cached: 10859652 kB' 'SwapCached: 0 kB' 'Active: 8313532 kB' 'Inactive: 3449392 kB' 'Active(anon): 7893980 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 894380 kB' 'Mapped: 154024 kB' 'Shmem: 7002852 kB' 'KReclaimable: 196424 kB' 'Slab: 590684 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 394260 kB' 'KernelStack: 19136 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963224 kB' 'Committed_AS: 9733516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208072 kB' 'VmallocChunk: 0 kB' 
'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.252 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.253 09:19:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.253 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the read/compare loop "continue"s past every remaining /proc/meminfo field, Active(anon) through Unaccepted, none of which matches HugePages_Total]
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
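For readers following the trace: setup/common.sh@17-@33 above is SPDK's get_meminfo helper, which opens one meminfo file, strips any per-node prefix, and scans field by field until the requested one matches. A minimal sketch reconstructed from the xtrace lines (not the verbatim source; feeding read through a here-string is an assumption about plumbing the trace does not show):

    # get_meminfo <field> [node] -- print one meminfo value, as traced above.
    shopt -s extglob                     # needed for the +([0-9]) pattern
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node id, prefer the per-node stats under sysfs (@23-@24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # node*/meminfo prefixes every line with "Node <N> "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        local line IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"        # @31
            [[ $var == "$get" ]] || continue     # @32: skip other fields
            echo "$val"                          # @33: e.g. 1536 here
            return 0
        done
        return 1
    }

Here it returns 1536 for HugePages_Total, and hugepages.sh@110 then asserts 1536 == nr_hugepages + surp + resv before splitting the count per NUMA node.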
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.254 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32588596 kB' 'MemFree: 27608120 kB' 'MemUsed: 4980476 kB' 'SwapCached: 0 kB' 'Active: 2527100 kB' 'Inactive: 63960 kB' 'Active(anon): 2340768 kB' 'Inactive(anon): 0 kB' 'Active(file): 186332 kB' 'Inactive(file): 63960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1954900 kB' 'Mapped: 86244 kB' 'AnonPages: 639400 kB' 'Shmem: 1704608 kB' 'KernelStack: 10248 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89320 kB' 'Slab: 307604 kB' 'SReclaimable: 89320 kB' 'SUnreclaim: 218284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the read/compare loop "continue"s past every node0 field, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:04:24.255 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.255 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.255 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
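The hugepages.sh@115-@117 entries around this point are the per-node accounting pass: each node's expected page count is padded with the reserved pool and with whatever surplus pages the kernel reports. A sketch of that loop as reconstructed from the trace (nodes_test, resv and get_meminfo are defined by the surrounding script):

    # Pad each node's expected hugepage count (hugepages.sh@115-@117).
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117
    done

Both nodes report HugePages_Surp: 0 in this run, so the (( nodes_test[node] += 0 )) entries that follow are no-ops.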
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.256 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27706372 kB' 'MemFree: 15539992 kB' 'MemUsed: 12166380 kB' 'SwapCached: 0 kB' 'Active: 5786468 kB' 'Inactive: 3385432 kB' 'Active(anon): 5553248 kB' 'Inactive(anon): 0 kB' 'Active(file): 233220 kB' 'Inactive(file): 3385432 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8916896 kB' 'Mapped: 67780 kB' 'AnonPages: 255012 kB' 'Shmem: 5298244 kB' 'KernelStack: 8904 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107104 kB' 'Slab: 283080 kB' 'SReclaimable: 107104 kB' 'SUnreclaim: 175976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the read/compare loop "continue"s past every node1 field, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
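A quick sanity check on the two per-node dumps above: MemUsed is simply MemTotal minus MemFree, i.e. 32588596 kB - 27608120 kB = 4980476 kB on node0 and 27706372 kB - 15539992 kB = 12166380 kB on node1, matching the printed values. The dumps also show the hugepage split the test is about to verify: HugePages_Total is 512 on node0 and 1024 on node1, which is where the 512/1024 expectation below comes from.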
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:24.257 node0=512 expecting 512
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:24.257 node1=1024 expecting 1024
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:24.257
00:04:24.257 real 0m3.187s
00:04:24.257 user 0m1.273s
00:04:24.257 sys 0m1.989s
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:24.257 09:19:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:24.257 ************************************
00:04:24.257 END TEST custom_alloc
00:04:24.257 ************************************
00:04:24.257 09:19:36 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:24.257 09:19:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:24.257 09:19:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:24.257 09:19:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:24.257 ************************************
00:04:24.257 START TEST no_shrink_alloc
00:04:24.257 ************************************
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
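The get_test_nr_hugepages trace above boils down to one division. A sketch of the arithmetic, assuming size is in kB like Hugepagesize (consistent with 'Hugepagesize: 2048 kB' and nr_hugepages=1024 in this log):

    # hugepages.sh@49-@57, roughly: turn a kB request into a page count.
    size=2097152               # requested size in kB (2 GiB)
    default_hugepages=2048     # Hugepagesize from /proc/meminfo, in kB
    (( size >= default_hugepages ))                # @55 guard
    (( nr_hugepages = size / default_hugepages ))  # @57: 1024 pages

The second argument, 0, is the node list, which is why @52 records node_ids=('0') and the per-node helper traced next pins the whole request to node 0.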
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.257 09:19:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:27.555 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:27.555 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:27.555 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:04:27.556 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:27.556 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
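get_test_nr_hugepages_per_node, traced at hugepages.sh@62-@73 above, distributes the request across NUMA nodes; with an explicit node list it pins everything there. A sketch reconstructed from the trace (nr_hugepages and no_nodes come from the enclosing script; this is not the verbatim source):

    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")              # @62: here ('0')
        local _nr_hugepages=$nr_hugepages    # @64: 1024
        local _no_nodes=$no_nodes            # @65: 2 NUMA nodes on this rig
        nodes_test=()
        local -g nodes_test                  # @67: result array, global
        if (( ${#user_nodes[@]} > 0 )); then # @69
            # @70 reuses _no_nodes as the loop variable, as the trace shows.
            for _no_nodes in "${user_nodes[@]}"; do
                nodes_test[_no_nodes]=$_nr_hugepages  # @71: node0 gets all 1024
            done
            return 0                         # @73
        fi
        # Fallback (not exercised in this run): spread the pages evenly
        # across _no_nodes nodes.
    }

After the helper returns, @198 runs "setup output", i.e. scripts/setup.sh, whose device-binding output is the vfio-pci list above.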
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44227700 kB' 'MemAvailable: 47697920 kB' 'Buffers: 12144 kB' 'Cached: 10859780 kB' 'SwapCached: 0 kB' 'Active: 8314636 kB' 'Inactive: 3449392 kB' 'Active(anon): 7895084 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 895336 kB' 'Mapped: 154084 kB' 'Shmem: 7002980 kB' 'KReclaimable: 196424 kB' 'Slab: 589740 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 393316 kB' 'KernelStack: 19152 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208200 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
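The hugepages.sh@96-@97 check traced at the end of the previous block decides whether transparent hugepages could distort the numbers: the expanded test string "always [madvise] never" is the kernel's THP mode line, where the bracketed token marks the active mode. Since the active mode is not "never", the script samples AnonHugePages as a baseline. A sketch of that check (reading the string from /sys/kernel/mm/transparent_hugepage/enabled is an assumption; the trace only shows the already-expanded value):

    # hugepages.sh@96-@97, roughly: record THP-backed anonymous memory
    # unless THP is fully disabled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *\[never\]* ]]; then
        anon=$(get_meminfo AnonHugePages)  # 0 kB in the dump above
    fi

Accordingly, the AnonHugePages scan over the dump above will return 0.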
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.556 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- 
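The block above is a single call to the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), then walks the snapshot field by field until the requested key matches and its value is echoed. Below is a minimal sketch of that loop, reconstructed from the xtrace lines rather than copied from the SPDK source, so treat the exact conditionals and line details as approximate:

    #!/usr/bin/env bash
    shopt -s extglob                            # needed for the +([0-9]) pattern below

    # Reconstruction of setup/common.sh:get_meminfo as it appears in the trace.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, read that node's view of the same counters.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"               # one array element per meminfo field
        mem=("${mem[@]#Node +([0-9]) }")        # strip the "Node N" prefix of per-node files

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # every non-matching field logs one 'continue'
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")     # the printf visible in the trace
    }

    anon=$(get_meminfo AnonHugePages)           # -> 0 on this machine

Every skipped field produces one [[ ... ]] test plus a 'continue' in the xtrace, which is why a single lookup dominates this part of the log.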
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44232400 kB' 'MemAvailable: 47702620 kB' 'Buffers: 12144 kB' 'Cached: 10859784 kB' 'SwapCached: 0 kB' 'Active: 8315748 kB' 'Inactive: 3449392 kB' 'Active(anon): 7896196 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 896944 kB' 'Mapped: 154084 kB' 'Shmem: 7002984 kB' 'KReclaimable: 196424 kB' 'Slab: 589696 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 393272 kB' 'KernelStack: 19136 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208168 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.557 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: per-field scan condensed -- same walk as above, this time testing every field against \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
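Each lookup rescans the full snapshot from the top, so every counter the test needs costs one complete pass over /proc/meminfo. When checking a box by hand, outside the harness, the same hugepage counters can be pulled in a single pass; this one-liner is purely illustrative and not part of the test suite:

    awk -F': +' '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1 "=" $2 }' /proc/meminfo

On this machine it would print HugePages_Total=1024, HugePages_Free=1024, HugePages_Rsvd=0 and HugePages_Surp=0, matching the values the trace extracts one call at a time.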
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44232064 kB' 'MemAvailable: 47702284 kB' 'Buffers: 12144 kB' 'Cached: 10859800 kB' 'SwapCached: 0 kB' 'Active: 8315232 kB' 'Inactive: 3449392 kB' 'Active(anon): 7895680 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 896356 kB' 'Mapped: 154008 kB' 'Shmem: 7003000 kB' 'KReclaimable: 196424 kB' 'Slab: 589756 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 393332 kB' 'KernelStack: 19152 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208168 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[setup/common.sh@31-32: per-field scan condensed -- same walk as above, this time testing every field against \H\u\g\e\P\a\g\e\s\_\R\s\v\d]
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:27.559 nr_hugepages=1024
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:27.559 resv_hugepages=0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:27.559 surplus_hugepages=0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:27.559 anon_hugepages=0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
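At this point the test has what it needs: surplus, reserved, and anonymous hugepages are all zero, and the two arithmetic guards assert that the pool still accounts for exactly the 1024 pages no_shrink_alloc asked for. A condensed, standalone replay of that bookkeeping (reusing the get_meminfo sketch above; the reading of the two guards is inferred from the trace, not quoted from setup/hugepages.sh):

    nr_hugepages=1024                    # pool size requested by the test
    anon=$(get_meminfo AnonHugePages)    # 0 here: no anonymous hugepages in play
    surp=$(get_meminfo HugePages_Surp)   # 0 here: nothing allocated beyond the pool
    resv=$(get_meminfo HugePages_Rsvd)   # 0 here: nothing reserved but unfaulted

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Both guards from hugepages.sh@107/@109 must hold or the test fails:
    ((1024 == nr_hugepages + surp + resv))
    ((1024 == nr_hugepages))

The HugePages_Total lookup that follows closes the loop by confirming the kernel still reports the full 1024-page pool.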
'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208168 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.558 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _
[xtrace condensed: setup/common.sh@31-32 scans each remaining /proc/meminfo key (Slab ... HugePages_Free) against HugePages_Rsvd; every non-matching key logs "# continue"]
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:27.559 nr_hugepages=1024
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:27.559 resv_hugepages=0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:27.559 surplus_hugepages=0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:27.559 anon_hugepages=0
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
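The counters above are all read through the same helper. As a reader's aid, here is a condensed sketch of the get_meminfo logic that the setup/common.sh@17-33 trace lines show; it is reconstructed from the xtrace rather than copied from the script, so treat details as approximate:

    # Sketch of get_meminfo as traced above (setup/common.sh@17-33);
    # reconstructed from the xtrace, not a verbatim copy of the script.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # with a node argument, read the per-node stats from sysfs instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of sysfs lines
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # compare the key (first field, colon stripped) to the requested one
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Called as get_meminfo HugePages_Rsvd for the global figure, or get_meminfo HugePages_Surp 0 for node 0, matching the invocations in this log.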
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.559 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44232936 kB' 'MemAvailable: 47703156 kB' 'Buffers: 12144 kB' 'Cached: 10859840 kB' 'SwapCached: 0 kB' 'Active: 8315000 kB' 'Inactive: 3449392 kB' 'Active(anon): 7895448 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 896080 kB' 'Mapped: 154008 kB' 'Shmem: 7003040 kB' 'KReclaimable: 196424 kB' 'Slab: 589756 kB' 'SReclaimable: 196424 kB' 'SUnreclaim: 393332 kB' 'KernelStack: 19152 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208168 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: setup/common.sh@31-32 scans each key of this snapshot (MemTotal ... Unaccepted) against HugePages_Total; every non-matching key logs "# continue"]
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.560 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32588596 kB' 'MemFree: 26588800 kB' 'MemUsed: 5999796 kB' 'SwapCached: 0 kB' 'Active: 2528248 kB' 'Inactive: 63960 kB' 'Active(anon): 2341916 kB' 'Inactive(anon): 0 kB' 'Active(file): 186332 kB' 'Inactive(file): 63960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1954984 kB' 'Mapped: 86288 kB' 'AnonPages: 640752 kB' 'Shmem: 1704692 kB' 'KernelStack: 10280 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89320 kB' 'Slab: 306956 kB' 'SReclaimable: 89320 kB' 'SUnreclaim: 217636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
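Before scanning node0's meminfo, the trace walks the /sys/devices/system/node/node* directories and folds the reserved and surplus counts into the per-node expectations (setup/hugepages.sh@112-117). A minimal sketch of that bookkeeping, with the array and variable names taken from the trace and the initial values assumed from this run (1024 pages on node 0, 0 on node 1):

    # Per-node bookkeeping as traced at setup/hugepages.sh@112-117.
    # nodes_test holds the counts the test measured; nodes_sys the counts
    # the kernel reports per node. Values below assume this run's layout.
    declare -A nodes_sys=([0]=1024 [1]=0)
    declare -A nodes_test=([0]=1024)
    resv=0  # HugePages_Rsvd read earlier in the log
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                # fold reserved pages in
        surp=$(get_meminfo HugePages_Surp "$node")    # node-local surplus
        (( nodes_test[node] += surp ))
    done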
[xtrace condensed: setup/common.sh@31-32 scans each node0 meminfo key (MemTotal ... HugePages_Free) against HugePages_Surp; every non-matching key logs "# continue"]
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:27.561 node0=1024 expecting 1024
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
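The sorted_t/sorted_s writes just above use a compact bash idiom: each count is stored as an array subscript, so duplicates collapse and what remains is the set of distinct per-node values. A small illustration of the idiom; the two-node values here are assumed for demonstration:

    # Subscript-as-set idiom from setup/hugepages.sh@126-130: equal counts
    # on every node leave exactly one distinct key in each array.
    declare -A sorted_t=() sorted_s=()
    declare -A nodes_test=([0]=1024 [1]=1024) nodes_sys=([0]=1024 [1]=1024)
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # one matching key set => every node held the expected page count
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'hugepage counts verified'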
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:27.561 09:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:30.857 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:30.857 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:30.857 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:04:30.857 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:30.857 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:04:30.857 INFO: Requested 512 hugepages but 1024 already allocated on node0
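That INFO line is the behaviour this no_shrink_alloc test pins down: with CLEAR_HUGE=no, scripts/setup.sh treats NRHUGE as a floor, so a request for 512 pages leaves an existing 1024-page allocation alone. A hedged sketch of that decision follows; the real logic lives in scripts/setup.sh, and this is only an illustration:

    # Illustration of the no-shrink decision behind the INFO line; the
    # sysfs path is the standard 2 MiB per-node knob, the rest is assumed.
    requested=${NRHUGE:-512}
    node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    allocated=$(< "$node_sysfs/nr_hugepages")
    if (( allocated >= requested )); then
        echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
    else
        echo "$requested" > "$node_sysfs/nr_hugepages"  # grow, never shrink
    fi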
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.857 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44240544 kB' 'MemAvailable: 47710712 kB' 'Buffers: 12144 kB' 'Cached: 10859928 kB' 'SwapCached: 0 kB' 'Active: 8316264 kB' 'Inactive: 3449392 kB' 'Active(anon): 7896712 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 896952 kB' 'Mapped: 154120 kB' 'Shmem: 7003128 kB' 'KReclaimable: 196320 kB' 'Slab: 589860 kB' 'SReclaimable: 196320 kB' 'SUnreclaim: 393540 kB' 'KernelStack: 19152 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208168 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: setup/common.sh@31-32 scans each key of this snapshot (MemTotal ... WritebackTmp) against AnonHugePages; every non-matching key logs "# continue"]
00:04:30.859 09:19:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
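The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one 'Field: value' pair at a time until it matches the requested key (here AnonHugePages, answered with 0). A minimal sketch of that helper, reconstructed from the traced lines (common.sh@17-33) rather than copied from the SPDK source tree — the extglob shopt, the one-line node fallback, and the return-1 miss path are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob   # assumed: needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1        # field to look up, e.g. AnonHugePages or HugePages_Surp
        local node=${2:-}   # optional NUMA node; empty selects the system-wide file
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read the node-local meminfo instead, when it exists;
        # with node empty the path /sys/devices/system/node/node/meminfo fails
        # the -e test, exactly as in the trace, and /proc/meminfo is kept.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node N "; strip it so parsing is uniform.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan the 'Field: value unit' triples; print the value of the requested field.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # assumed: requested field not present
    }

Because IFS=': ' splits the trailing kB unit into the discarded third field, get_meminfo AnonHugePages prints a bare 0, which hugepages.sh@97 stores as anon=0; the same helper is reused for HugePages_Surp, HugePages_Rsvd, and HugePages_Total below.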
00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace elided: common.sh@17-31 — get_meminfo sets get=HugePages_Surp, node='', falls through to mem_f=/proc/meminfo, and mapfiles the snapshot printed below]
00:04:30.859 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44241108 kB' 'MemAvailable: 47711276 kB' 'Buffers: 12144 kB' 'Cached: 10859932 kB' 'SwapCached: 0 kB' 'Active: 8315944 kB' 'Inactive: 3449392 kB' 'Active(anon): 7896392 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 896568 kB' 'Mapped: 154116 kB' 'Shmem: 7003132 kB' 'KReclaimable: 196320 kB' 'Slab: 589824 kB' 'SReclaimable: 196320 kB' 'SUnreclaim: 393504 kB' 'KernelStack: 19120 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208136 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[xtrace elided: common.sh@31-32 — per-field comparisons against HugePages_Surp, each non-match ending in continue]
00:04:30.861 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.861 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.861 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.861 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:30.861 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: common.sh@17-31 — same setup as above with get=HugePages_Rsvd]
00:04:30.861 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44241612 kB' 'MemAvailable: 47711780 kB' 'Buffers: 12144 kB' 'Cached: 10859932 kB' 'SwapCached: 0 kB' 'Active: 8316448 kB' 'Inactive: 3449392 kB' 'Active(anon): 7896896 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 897072 kB' 'Mapped: 154116 kB' 'Shmem: 7003132 kB' 'KReclaimable: 196320 kB' 'Slab: 589824 kB' 'SReclaimable: 196320 kB' 'SUnreclaim: 393504 kB' 'KernelStack: 19120 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208136 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
[xtrace elided: common.sh@31-32 — per-field comparisons against HugePages_Rsvd, each non-match ending in continue]
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:30.863 nr_hugepages=1024
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:30.863 resv_hugepages=0
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:30.863 surplus_hugepages=0
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:30.863 anon_hugepages=0
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
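With all four counters read back (nr_hugepages=1024, surp=0, resv=0, anon=0), the two checks above are plain shell arithmetic; restated on their own, under the variable names the trace uses, they amount to:

    (( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107: 1024 == 1024 + 0 + 0, the pool was not shrunk
    (( 1024 == nr_hugepages ))                 # hugepages.sh@109: still exactly the 1024 pages requested

Both tests exit 0, so the script proceeds to re-read HugePages_Total below.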
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.863 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44242376 kB' 'MemAvailable: 47712544 kB' 'Buffers: 12144 kB' 'Cached: 10859988 kB' 'SwapCached: 0 kB' 'Active: 8316124 kB' 'Inactive: 3449392 kB' 'Active(anon): 7896572 kB' 'Inactive(anon): 0 kB' 'Active(file): 419552 kB' 'Inactive(file): 3449392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 896692 kB' 'Mapped: 154116 kB' 'Shmem: 7003188 kB' 'KReclaimable: 196320 kB' 'Slab: 589872 kB' 'SReclaimable: 196320 kB' 'SUnreclaim: 393552 kB' 'KernelStack: 19104 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9734908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208136 kB' 'VmallocChunk: 0 kB' 'Percpu: 62208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 498644 kB' 'DirectMap2M: 12812288 kB' 'DirectMap1G: 55574528 kB'
00:04:30.863-00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [xtrace condensed: every field from MemTotal through Unaccepted is compared against HugePages_Total and skipped with continue]
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
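get_nodes, traced just above, builds the per-NUMA-node picture that the assertions below consume; per this trace it ends up with 1024 pages on node 0 and 0 on node 1. A hedged bash sketch of the same accounting (the real script expands the counts inline; the sysfs read shown here is an assumption, using the 2048 kB page size reported in the snapshot):

    # Collect the configured 2 MiB hugepage count per NUMA node.
    declare -a nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=$(< "$node"/hugepages/hugepages-2048kB/nr_hugepages)
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2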
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.865 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32588596 kB' 'MemFree: 26610164 kB' 'MemUsed: 5978432 kB' 'SwapCached: 0 kB' 'Active: 2529708 kB' 'Inactive: 63960 kB' 'Active(anon): 2343376 kB' 'Inactive(anon): 0 kB' 'Active(file): 186332 kB' 'Inactive(file): 63960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1955032 kB' 'Mapped: 86336 kB' 'AnonPages: 641872 kB' 'Shmem: 1704740 kB' 'KernelStack: 10248 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89224 kB' 'Slab: 306956 kB' 'SReclaimable: 89224 kB' 'SUnreclaim: 217732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:30.865-00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [xtrace condensed: every node0 field from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped with continue]
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:30.867 node0=1024 expecting 1024
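The per-node branch just traced is the same parser with the node argument set: node meminfo files live under sysfs and prefix each line with "Node N ", which the earlier sketch strips before matching. Hypothetical invocations of that sketch, with the values this run would produce:

    get_meminfo_sketch HugePages_Surp 0     # prints 0: node0 has no surplus pages
    get_meminfo_sketch HugePages_Total 0    # prints 1024: the whole pool sits on node0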
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:30.867 
00:04:30.867 real 0m6.535s
00:04:30.867 user 0m2.512s
00:04:30.867 sys 0m4.162s
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:30.867 09:19:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:30.867 ************************************
00:04:30.867 END TEST no_shrink_alloc
00:04:30.867 ************************************
00:04:30.867 09:19:43 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:30.867 09:19:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:30.867 09:19:43 setup.sh.hugepages -- setup/hugepages.sh@39-41 [xtrace condensed: for each node (0 and 1) and each hugepage-size directory under /sys/devices/system/node/node$node/hugepages/, echo 0]
00:04:30.867 09:19:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:30.867 09:19:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:30.867 
00:04:30.867 real 0m25.543s
00:04:30.867 user 0m9.102s
00:04:30.867 sys 0m14.766s
00:04:30.867 09:19:43 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:30.867 09:19:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:30.867 ************************************
00:04:30.867 END TEST hugepages
00:04:30.867 ************************************
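clear_hp, traced just before the banner, is the teardown: it resets every per-node hugepage pool so the next suite starts clean. A minimal sketch of that sysfs walk, under the assumption that the echo 0 seen in the trace is redirected into each pool's nr_hugepages (redirection is not shown by xtrace; root required):

    # Zero every hugepage pool on every NUMA node, then flag the state.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
    export CLEAR_HUGE=yes    # same variable the traced script exports here
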
00:04:30.867 09:19:43 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:04:30.867 09:19:43 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:30.867 09:19:43 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:30.867 09:19:43 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:30.867 ************************************
00:04:30.867 START TEST driver
00:04:30.867 ************************************
00:04:30.867 09:19:43 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:04:30.867 * Looking for test storage...
00:04:30.867 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:04:30.867 09:19:43 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:30.867 09:19:43 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:30.867 09:19:43 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:43.078 09:19:53 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:43.078 09:19:53 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.078 09:19:53 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.078 09:19:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:43.078 ************************************
00:04:43.078 START TEST guess_driver
00:04:43.078 ************************************
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 198 > 0 ))
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz ... == *\.\k\o* ]] [xtrace condensed: modprobe resolves irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core and vfio-pci (all .ko.xz), so the match succeeds]
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
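pick_driver's vfio branch, traced above, boils down to two checks: the kernel exposes at least one IOMMU group, and vfio_pci (plus its dependency chain) resolves via modprobe. A hedged condensation in bash (function name assumed; the real script also weighs unsafe no-IOMMU mode, and would report "No valid driver found" if nothing matched, as the check below verifies):

    # Pick vfio-pci when the IOMMU is populated and the module resolves.
    pick_vfio() {
        local groups=(/sys/kernel/iommu_groups/*)    # 198 groups on this box
        (( ${#groups[@]} > 0 )) || return 1          # empty-dir case ignored in this sketch
        modprobe --show-depends vfio_pci | grep -q '\.ko' || return 1
        echo vfio-pci
    }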
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:43.078 Looking for driver=vfio-pci
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.078 09:19:53 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:44.510-00:04:46.678 09:19:57-09:19:59 setup.sh.driver.guess_driver -- setup/driver.sh@57-61 [xtrace condensed: the triple { read -r _ _ _ _ marker setup_driver; [[ -> == \-\> ]]; [[ vfio-pci == vfio-pci ]] } repeats once per device line emitted by setup.sh config; every device reports vfio-pci, so fail stays 0]
00:04:46.678 09:19:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
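The check that just passed is the tail of a verification loop: driver.sh re-runs setup.sh config and confirms that every device line names the driver it picked. A hedged sketch of that loop (invocation path and output format assumed from the trace, which reads six whitespace-separated fields per line with '->' as the fifth):

    # Count devices whose bound driver differs from the expected one.
    fail=0 expected=vfio-pci
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue              # only binding lines carry '->'
        [[ $setup_driver == "$expected" ]] || (( fail++ ))
    done < <(./scripts/setup.sh config)                # path assumed
    (( fail == 0 )) && echo "all devices bound to $expected"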
00:04:46.678 09:19:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:46.678 09:19:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:46.678 09:19:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:58.893 
00:04:58.893 real 0m15.706s
00:04:58.893 user 0m2.676s
00:04:58.893 sys 0m4.713s
00:04:58.893 09:20:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:58.893 09:20:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:58.893 ************************************
00:04:58.893 END TEST guess_driver
00:04:58.893 ************************************
00:04:58.893 
00:04:58.893 real 0m26.187s
00:04:58.893 user 0m4.069s
00:04:58.893 sys 0m7.349s
00:04:58.893 09:20:09 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:58.893 09:20:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:58.893 ************************************
00:04:58.893 END TEST driver
00:04:58.893 ************************************
00:04:58.893 09:20:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:58.893 09:20:09 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:58.893 09:20:09 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:58.893 09:20:09 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:58.893 ************************************
00:04:58.893 START TEST devices
00:04:58.893 ************************************
00:04:58.893 09:20:09 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:58.893 * Looking for test storage...
00:04:58.893 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:58.893 09:20:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:58.893 09:20:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:58.893 09:20:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.893 09:20:09 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:00.273 09:20:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@201 -- 
# ctrl=nvme0n1 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:00.273 09:20:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:dd:00.0 00:05:00.274 09:20:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\d\:\0\0\.\0* ]] 00:05:00.274 09:20:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:00.274 09:20:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:00.274 09:20:12 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:00.274 No valid GPT data, bailing 00:05:00.274 09:20:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:00.274 09:20:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:00.274 09:20:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:00.274 09:20:13 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:dd:00.0 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:df:00.0 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\f\:\0\0\.\0* ]] 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:05:00.274 No valid GPT data, bailing 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:00.274 09:20:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:00.274 09:20:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:00.274 09:20:13 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:df:00.0 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:00.274 09:20:13 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:de:00.0 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\e\:\0\0\.\0* ]] 00:05:00.274 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:00.274 09:20:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:05:00.534 No valid GPT data, bailing 00:05:00.534 09:20:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:00.534 09:20:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:00.534 09:20:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:00.534 09:20:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:00.534 09:20:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:00.534 09:20:13 setup.sh.devices -- setup/common.sh@80 -- # echo 400088457216 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 400088457216 >= min_disk_size )) 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:de:00.0 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:dc:00.0 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\c\:\0\0\.\0* ]] 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:00.534 09:20:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:00.534 09:20:13 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme3n1 00:05:00.534 No valid GPT data, bailing 00:05:00.534 09:20:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:00.534 09:20:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:00.534 09:20:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:00.534 09:20:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:00.534 09:20:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:00.534 09:20:13 setup.sh.devices -- setup/common.sh@80 -- # echo 400088457216 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 400088457216 >= min_disk_size )) 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:dc:00.0 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:00.534 09:20:13 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount 
nvme_mount 00:05:00.534 09:20:13 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.534 09:20:13 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.534 09:20:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:00.534 ************************************ 00:05:00.534 START TEST nvme_mount 00:05:00.534 ************************************ 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.534 09:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:01.476 Creating new GPT entries in memory. 00:05:01.476 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.476 other utilities. 00:05:01.476 09:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.476 09:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.476 09:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.476 09:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.476 09:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:02.857 Creating new GPT entries in memory. 00:05:02.857 The operation has completed successfully. 
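The sequence that just completed is the partition_drive helper: the requested size of 1073741824 bytes is divided by 512 to give 2097152 sectors, so the single test partition spans sectors 2048 through 2099199 — exactly the --new=1:2048:2099199 traced above. A hedged sketch of the same steps, using the device name from this run:

size=1073741824                        # 1 GiB requested for the partition
(( size /= 512 ))                      # -> 2097152 sectors of 512 bytes
part_start=2048
part_end=$(( part_start + size - 1 ))  # 2099199
sgdisk /dev/nvme0n1 --zap-all          # wipe any existing GPT/MBR structures
# flock serializes partitioners on the disk, matching the traced invocation.
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}
# The harness then waits (sync_dev_uevents.sh) for the kernel uevent so that
# /dev/nvme0n1p1 exists before mkfs.ext4 runs against it.

With the partition in place, the test formats it with mkfs.ext4 -qF, mounts it under test/setup/nvme_mount, and drops a test_nvme file there to verify the mount — the wait/mkfs/mount lines below record exactly that.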
00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 382579 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:dd:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.857 09:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 
09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.149 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:06.150 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:06.150 09:20:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:06.409 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:06.409 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:06.409 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:06.409 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- 
setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:dd:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.409 09:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:08.943 09:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:08.943 09:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:08.943 09:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:08.943 09:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.943 09:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:08.943 09:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.200 09:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.200 09:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.200 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.200 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.200 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.200 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.200 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.200 09:20:22 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.200 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:09.201 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.458 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.458 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:09.458 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # 
mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:dd:00.0 data@nvme0n1 '' '' 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.716 09:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:12.250 09:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.250 09:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:12.250 09:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:12.250 09:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.250 09:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.250 09:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:12.509 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.767 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.767 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:12.767 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:12.767 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 
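cleanup_nvme, invoked here, is the inverse of the setup: unmount the test mount point if it is still mounted, then wipe filesystem and partition signatures so the disk returns to a blank state for the next test. A minimal sketch under the same paths (SPDK_DIR stands in for the long workspace prefix and is only shorthand here):

SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
nvme_mount=$SPDK_DIR/test/setup/nvme_mount
if mountpoint -q "$nvme_mount"; then
    umount "$nvme_mount"               # release the mount before wiping
fi
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # partition signatures first
[[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1    # then the whole disk

The wipefs output that follows shows this in action: the ext4 magic (53 ef) erased from the whole-disk filesystem, after which nvme_mount reports its totals and the dm_mount test takes over.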
00:05:12.768 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.768 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.768 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.768 09:20:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.768 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.768 00:05:12.768 real 0m12.321s 00:05:12.768 user 0m3.789s 00:05:12.768 sys 0m6.271s 00:05:12.768 09:20:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.768 09:20:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:12.768 ************************************ 00:05:12.768 END TEST nvme_mount 00:05:12.768 ************************************ 00:05:12.768 09:20:25 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:12.768 09:20:25 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.768 09:20:25 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.768 09:20:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:13.026 ************************************ 00:05:13.026 START TEST dm_mount 00:05:13.026 ************************************ 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:13.026 09:20:25 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:13.962 Creating new GPT entries in memory. 00:05:13.962 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:13.962 other utilities. 00:05:13.962 09:20:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:13.962 09:20:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.962 09:20:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:13.962 09:20:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:13.962 09:20:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:14.897 Creating new GPT entries in memory. 00:05:14.897 The operation has completed successfully. 00:05:14.897 09:20:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:14.897 09:20:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.897 09:20:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:14.897 09:20:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:14.897 09:20:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:16.274 The operation has completed successfully. 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 387389 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:16.274 09:20:28 
setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:dd:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.274 09:20:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:18.807 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:18.807 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:18.807 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:18.807 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.807 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:18.807 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 
09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.067 
09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:19.067 09:20:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:dd:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.326 09:20:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:22.614 09:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.614 09:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:22.614 09:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:22.614 09:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.614 09:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:22.615 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:22.615 00:05:22.615 real 0m9.780s 00:05:22.615 user 0m2.478s 00:05:22.615 sys 0m4.260s 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.615 09:20:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:22.615 ************************************ 00:05:22.615 END TEST dm_mount 00:05:22.615 ************************************ 00:05:22.615 09:20:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:22.615 09:20:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.615 09:20:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.615 09:20:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.615 09:20:35 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.615 09:20:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.615 09:20:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.183 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:23.183 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:23.183 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.183 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.183 09:20:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:23.183 09:20:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:23.183 09:20:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.183 09:20:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.183 09:20:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.183 09:20:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.183 09:20:35 setup.sh.devices -- setup/devices.sh@15 -- # 
wipefs --all /dev/nvme0n1 00:05:23.183 00:05:23.183 real 0m26.062s 00:05:23.183 user 0m7.653s 00:05:23.183 sys 0m12.906s 00:05:23.183 09:20:35 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.183 09:20:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:23.183 ************************************ 00:05:23.183 END TEST devices 00:05:23.183 ************************************ 00:05:23.183 00:05:23.183 real 1m49.998s 00:05:23.183 user 0m29.018s 00:05:23.183 sys 0m49.415s 00:05:23.183 09:20:35 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.183 09:20:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:23.183 ************************************ 00:05:23.183 END TEST setup.sh 00:05:23.183 ************************************ 00:05:23.183 09:20:35 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:26.469 Hugepages 00:05:26.469 node hugesize free / total 00:05:26.469 node0 1048576kB 0 / 0 00:05:26.469 node0 2048kB 2048 / 2048 00:05:26.469 node1 1048576kB 0 / 0 00:05:26.469 node1 2048kB 0 / 0 00:05:26.469 00:05:26.469 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:26.469 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:26.469 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:26.469 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:26.469 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:26.469 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:26.469 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:26.469 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:26.469 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:26.469 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:26.469 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:26.469 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:26.469 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:26.469 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:26.469 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:26.469 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:26.469 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:26.469 NVMe 0000:dc:00.0 8086 0953 1 nvme nvme3 nvme3n1 00:05:26.469 NVMe 0000:dd:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:26.728 NVMe 0000:de:00.0 8086 0953 1 nvme nvme2 nvme2n1 00:05:26.728 NVMe 0000:df:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:05:26.728 09:20:39 -- spdk/autotest.sh@130 -- # uname -s 00:05:26.728 09:20:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:26.728 09:20:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:26.728 09:20:39 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:30.019 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 
0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:30.019 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:31.927 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:05:31.927 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:05:32.189 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:05:32.189 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:05:32.189 09:20:44 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:33.126 09:20:45 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:33.126 09:20:45 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:33.126 09:20:45 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:33.126 09:20:45 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:33.126 09:20:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:33.126 09:20:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:33.126 09:20:45 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.385 09:20:45 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:33.385 09:20:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:33.385 09:20:46 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:33.385 09:20:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:05:33.385 09:20:46 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:36.674 Waiting for block devices as requested 00:05:36.674 0000:dd:00.0 (8086 0a54): vfio-pci -> nvme 00:05:36.674 0000:df:00.0 (8086 0a54): vfio-pci -> nvme 00:05:36.674 0000:de:00.0 (8086 0953): vfio-pci -> nvme 00:05:39.211 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:39.211 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:39.211 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:39.471 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:39.471 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:39.471 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:39.471 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:39.730 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:39.730 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:39.730 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:39.730 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:39.990 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:39.990 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:39.990 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:40.250 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:40.250 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:40.250 0000:dc:00.0 (8086 0953): vfio-pci -> nvme 00:05:43.541 09:20:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:43.541 09:20:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:dc:00.0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # grep 0000:dc:00.0/nvme/nvme 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 ]] 00:05:43.541 09:20:55 -- 
common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # oacs=' 0x6' 00:05:43.541 09:20:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:43.541 09:20:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:dd:00.0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # grep 0000:dd:00.0/nvme/nvme 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:43.541 09:20:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:43.541 09:20:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:43.541 09:20:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:43.541 09:20:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1557 -- # continue 00:05:43.541 09:20:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:43.541 09:20:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:de:00.0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # grep 0000:de:00.0/nvme/nvme 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1503 -- # [[ 
-z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:43.541 09:20:55 -- common/autotest_common.sh@1545 -- # oacs=' 0x6' 00:05:43.541 09:20:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:43.541 09:20:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:df:00.0 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # grep 0000:df:00.0/nvme/nvme 00:05:43.541 09:20:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 00:05:43.541 09:20:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 ]] 00:05:43.541 09:20:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 00:05:43.541 09:20:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:43.541 09:20:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:43.541 09:20:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:43.542 09:20:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:43.542 09:20:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:43.542 09:20:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:43.542 09:20:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:43.542 09:20:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:43.542 09:20:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:43.542 09:20:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:43.542 09:20:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:43.542 09:20:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:43.542 09:20:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:43.542 09:20:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:43.542 09:20:55 -- common/autotest_common.sh@1557 -- # continue 00:05:43.542 09:20:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:43.542 09:20:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.542 09:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:43.542 09:20:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:43.542 09:20:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.542 09:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:43.542 09:20:55 -- spdk/autotest.sh@139 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:46.834 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:46.834 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:48.212 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:05:48.212 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:05:48.471 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:05:48.471 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:05:48.730 09:21:01 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:48.730 09:21:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.730 09:21:01 -- common/autotest_common.sh@10 -- # set +x 00:05:48.730 09:21:01 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:48.730 09:21:01 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:48.730 09:21:01 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:48.730 09:21:01 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:48.730 09:21:01 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:48.730 09:21:01 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:48.730 09:21:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:48.730 09:21:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:48.730 09:21:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.730 09:21:01 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:48.730 09:21:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:48.730 09:21:01 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:48.730 09:21:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:05:48.730 09:21:01 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:48.730 09:21:01 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:dc:00.0/device 00:05:48.730 09:21:01 -- common/autotest_common.sh@1580 -- # device=0x0953 00:05:48.730 09:21:01 -- common/autotest_common.sh@1581 -- # [[ 0x0953 == \0\x\0\a\5\4 ]] 00:05:48.730 09:21:01 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:48.730 09:21:01 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:dd:00.0/device 00:05:48.730 09:21:01 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:48.730 09:21:01 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:48.730 09:21:01 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:48.730 09:21:01 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:48.730 09:21:01 -- common/autotest_common.sh@1580 -- # cat 
/sys/bus/pci/devices/0000:de:00.0/device 00:05:48.730 09:21:01 -- common/autotest_common.sh@1580 -- # device=0x0953 00:05:48.730 09:21:01 -- common/autotest_common.sh@1581 -- # [[ 0x0953 == \0\x\0\a\5\4 ]] 00:05:48.730 09:21:01 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:48.730 09:21:01 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:df:00.0/device 00:05:48.730 09:21:01 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:48.730 09:21:01 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:48.730 09:21:01 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:48.730 09:21:01 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:dd:00.0 0000:df:00.0 00:05:48.730 09:21:01 -- common/autotest_common.sh@1592 -- # [[ -z 0000:dd:00.0 ]] 00:05:48.730 09:21:01 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=399115 00:05:48.730 09:21:01 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.730 09:21:01 -- common/autotest_common.sh@1598 -- # waitforlisten 399115 00:05:48.730 09:21:01 -- common/autotest_common.sh@831 -- # '[' -z 399115 ']' 00:05:48.730 09:21:01 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.730 09:21:01 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.730 09:21:01 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.730 09:21:01 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.730 09:21:01 -- common/autotest_common.sh@10 -- # set +x 00:05:48.730 [2024-07-25 09:21:01.468606] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:48.731 [2024-07-25 09:21:01.468681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399115 ] 00:05:48.731 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.731 [2024-07-25 09:21:01.526455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.990 [2024-07-25 09:21:01.607863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.558 09:21:02 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.558 09:21:02 -- common/autotest_common.sh@864 -- # return 0 00:05:49.558 09:21:02 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:49.558 09:21:02 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:49.558 09:21:02 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:dd:00.0 00:05:52.847 nvme0n1 00:05:52.847 09:21:05 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:52.847 [2024-07-25 09:21:05.442654] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:52.847 request: 00:05:52.847 { 00:05:52.847 "nvme_ctrlr_name": "nvme0", 00:05:52.847 "password": "test", 00:05:52.847 "method": "bdev_nvme_opal_revert", 00:05:52.847 "req_id": 1 00:05:52.847 } 00:05:52.847 Got JSON-RPC error response 00:05:52.847 response: 00:05:52.847 { 00:05:52.847 "code": -32602, 00:05:52.847 "message": "Invalid parameters" 00:05:52.847 } 00:05:52.847 09:21:05 -- common/autotest_common.sh@1604 -- # true 00:05:52.847 09:21:05 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:52.847 09:21:05 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:52.847 09:21:05 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:df:00.0 00:05:56.138 nvme1n1 00:05:56.138 09:21:08 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:05:56.138 [2024-07-25 09:21:08.625675] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:05:56.138 request: 00:05:56.138 { 00:05:56.138 "nvme_ctrlr_name": "nvme1", 00:05:56.138 "password": "test", 00:05:56.138 "method": "bdev_nvme_opal_revert", 00:05:56.138 "req_id": 1 00:05:56.138 } 00:05:56.138 Got JSON-RPC error response 00:05:56.138 response: 00:05:56.138 { 00:05:56.138 "code": -32602, 00:05:56.138 "message": "Invalid parameters" 00:05:56.138 } 00:05:56.138 09:21:08 -- common/autotest_common.sh@1604 -- # true 00:05:56.138 09:21:08 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:56.138 09:21:08 -- common/autotest_common.sh@1608 -- # killprocess 399115 00:05:56.138 09:21:08 -- common/autotest_common.sh@950 -- # '[' -z 399115 ']' 00:05:56.138 09:21:08 -- common/autotest_common.sh@954 -- # kill -0 399115 00:05:56.138 09:21:08 -- common/autotest_common.sh@955 -- # uname 00:05:56.138 09:21:08 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.138 09:21:08 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399115 00:05:56.138 09:21:08 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.138 09:21:08 -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.138 09:21:08 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399115' 00:05:56.138 killing process with pid 399115 00:05:56.138 09:21:08 -- common/autotest_common.sh@969 -- # kill 399115 00:05:56.138 09:21:08 -- common/autotest_common.sh@974 -- # wait 399115 00:05:59.426 09:21:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:59.426 09:21:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:59.426 09:21:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:59.426 09:21:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:59.426 09:21:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:59.426 09:21:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.426 09:21:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.426 09:21:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:59.426 09:21:11 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:59.426 09:21:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.426 09:21:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.426 09:21:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.426 ************************************ 00:05:59.426 START TEST env 00:05:59.426 ************************************ 00:05:59.426 09:21:11 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:59.426 * Looking for test storage... 00:05:59.426 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:59.426 09:21:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:59.426 09:21:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.426 09:21:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.426 09:21:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.426 ************************************ 00:05:59.426 START TEST env_memory 00:05:59.426 ************************************ 00:05:59.426 09:21:11 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:59.426 00:05:59.426 00:05:59.426 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.426 http://cunit.sourceforge.net/ 00:05:59.426 00:05:59.426 00:05:59.426 Suite: memory 00:05:59.426 Test: alloc and free memory map ...[2024-07-25 09:21:11.781970] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:59.426 passed 00:05:59.426 Test: mem map translation ...[2024-07-25 09:21:11.796023] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:59.426 [2024-07-25 09:21:11.796038] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:59.426 [2024-07-25 09:21:11.796075] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:59.426 [2024-07-25 09:21:11.796082] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:59.426 passed 00:05:59.426 Test: mem map registration ...[2024-07-25 09:21:11.819561] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:59.426 [2024-07-25 09:21:11.819577] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:59.426 passed 00:05:59.426 Test: mem map adjacent registrations ...passed 00:05:59.426 00:05:59.426 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.426 suites 1 1 n/a 0 0 00:05:59.426 tests 4 4 4 0 0 00:05:59.426 asserts 152 152 152 0 n/a 00:05:59.426 00:05:59.426 Elapsed time = 0.093 seconds 00:05:59.426 00:05:59.426 real 0m0.104s 00:05:59.426 user 0m0.094s 00:05:59.426 sys 0m0.009s 00:05:59.426 09:21:11 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.426 09:21:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:59.426 ************************************ 00:05:59.426 END TEST env_memory 00:05:59.426 ************************************ 00:05:59.426 09:21:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:59.426 09:21:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.426 09:21:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.426 09:21:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.426 ************************************ 00:05:59.426 START TEST env_vtophys 00:05:59.426 ************************************ 00:05:59.426 09:21:11 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:59.426 EAL: lib.eal log level changed from notice to debug 00:05:59.426 EAL: Detected lcore 0 as core 0 on socket 0 00:05:59.426 EAL: Detected lcore 1 as core 1 on socket 0 00:05:59.426 EAL: Detected lcore 2 as core 2 on socket 0 00:05:59.426 EAL: Detected lcore 3 as core 3 on socket 0 00:05:59.426 EAL: Detected lcore 4 as core 4 on socket 0 00:05:59.426 EAL: Detected lcore 5 as core 5 on socket 0 00:05:59.426 EAL: Detected lcore 6 as core 8 on socket 0 00:05:59.426 EAL: Detected lcore 7 as core 9 on socket 0 00:05:59.426 EAL: Detected lcore 8 as core 10 on socket 0 00:05:59.426 EAL: Detected lcore 9 as core 11 on socket 0 00:05:59.426 EAL: Detected lcore 10 as core 12 on socket 0 00:05:59.426 EAL: Detected lcore 11 as core 16 on socket 0 00:05:59.426 EAL: Detected lcore 12 as core 17 on socket 0 00:05:59.426 EAL: Detected lcore 13 as core 18 on socket 0 00:05:59.426 EAL: Detected lcore 14 as core 19 on socket 0 00:05:59.426 EAL: Detected lcore 15 as core 20 on socket 0 00:05:59.426 EAL: Detected lcore 16 as core 21 on socket 0 00:05:59.426 EAL: Detected lcore 17 as core 24 on socket 0 00:05:59.426 EAL: Detected lcore 18 as core 25 on socket 0 00:05:59.426 EAL: Detected lcore 19 as core 26 on socket 0 00:05:59.426 EAL: Detected lcore 20 as core 27 on socket 0 00:05:59.426 EAL: Detected lcore 21 as core 28 on socket 0 00:05:59.426 EAL: Detected lcore 22 as core 0 on socket 1 00:05:59.426 EAL: Detected lcore 23 as core 1 on socket 1 00:05:59.426 EAL: Detected lcore 24 as core 2 on socket 1 00:05:59.426 EAL: Detected lcore 25 as core 3 on socket 1 00:05:59.426 EAL: Detected lcore 26 as core 4 on 
socket 1 00:05:59.426 EAL: Detected lcore 27 as core 5 on socket 1 00:05:59.426 EAL: Detected lcore 28 as core 8 on socket 1 00:05:59.426 EAL: Detected lcore 29 as core 9 on socket 1 00:05:59.426 EAL: Detected lcore 30 as core 10 on socket 1 00:05:59.426 EAL: Detected lcore 31 as core 11 on socket 1 00:05:59.426 EAL: Detected lcore 32 as core 12 on socket 1 00:05:59.426 EAL: Detected lcore 33 as core 16 on socket 1 00:05:59.426 EAL: Detected lcore 34 as core 17 on socket 1 00:05:59.426 EAL: Detected lcore 35 as core 18 on socket 1 00:05:59.426 EAL: Detected lcore 36 as core 19 on socket 1 00:05:59.426 EAL: Detected lcore 37 as core 20 on socket 1 00:05:59.426 EAL: Detected lcore 38 as core 21 on socket 1 00:05:59.426 EAL: Detected lcore 39 as core 24 on socket 1 00:05:59.426 EAL: Detected lcore 40 as core 25 on socket 1 00:05:59.426 EAL: Detected lcore 41 as core 26 on socket 1 00:05:59.426 EAL: Detected lcore 42 as core 27 on socket 1 00:05:59.426 EAL: Detected lcore 43 as core 28 on socket 1 00:05:59.426 EAL: Detected lcore 44 as core 0 on socket 0 00:05:59.426 EAL: Detected lcore 45 as core 1 on socket 0 00:05:59.426 EAL: Detected lcore 46 as core 2 on socket 0 00:05:59.426 EAL: Detected lcore 47 as core 3 on socket 0 00:05:59.426 EAL: Detected lcore 48 as core 4 on socket 0 00:05:59.426 EAL: Detected lcore 49 as core 5 on socket 0 00:05:59.426 EAL: Detected lcore 50 as core 8 on socket 0 00:05:59.426 EAL: Detected lcore 51 as core 9 on socket 0 00:05:59.426 EAL: Detected lcore 52 as core 10 on socket 0 00:05:59.426 EAL: Detected lcore 53 as core 11 on socket 0 00:05:59.426 EAL: Detected lcore 54 as core 12 on socket 0 00:05:59.426 EAL: Detected lcore 55 as core 16 on socket 0 00:05:59.426 EAL: Detected lcore 56 as core 17 on socket 0 00:05:59.426 EAL: Detected lcore 57 as core 18 on socket 0 00:05:59.426 EAL: Detected lcore 58 as core 19 on socket 0 00:05:59.426 EAL: Detected lcore 59 as core 20 on socket 0 00:05:59.426 EAL: Detected lcore 60 as core 21 on socket 0 00:05:59.426 EAL: Detected lcore 61 as core 24 on socket 0 00:05:59.426 EAL: Detected lcore 62 as core 25 on socket 0 00:05:59.426 EAL: Detected lcore 63 as core 26 on socket 0 00:05:59.426 EAL: Detected lcore 64 as core 27 on socket 0 00:05:59.426 EAL: Detected lcore 65 as core 28 on socket 0 00:05:59.426 EAL: Detected lcore 66 as core 0 on socket 1 00:05:59.426 EAL: Detected lcore 67 as core 1 on socket 1 00:05:59.427 EAL: Detected lcore 68 as core 2 on socket 1 00:05:59.427 EAL: Detected lcore 69 as core 3 on socket 1 00:05:59.427 EAL: Detected lcore 70 as core 4 on socket 1 00:05:59.427 EAL: Detected lcore 71 as core 5 on socket 1 00:05:59.427 EAL: Detected lcore 72 as core 8 on socket 1 00:05:59.427 EAL: Detected lcore 73 as core 9 on socket 1 00:05:59.427 EAL: Detected lcore 74 as core 10 on socket 1 00:05:59.427 EAL: Detected lcore 75 as core 11 on socket 1 00:05:59.427 EAL: Detected lcore 76 as core 12 on socket 1 00:05:59.427 EAL: Detected lcore 77 as core 16 on socket 1 00:05:59.427 EAL: Detected lcore 78 as core 17 on socket 1 00:05:59.427 EAL: Detected lcore 79 as core 18 on socket 1 00:05:59.427 EAL: Detected lcore 80 as core 19 on socket 1 00:05:59.427 EAL: Detected lcore 81 as core 20 on socket 1 00:05:59.427 EAL: Detected lcore 82 as core 21 on socket 1 00:05:59.427 EAL: Detected lcore 83 as core 24 on socket 1 00:05:59.427 EAL: Detected lcore 84 as core 25 on socket 1 00:05:59.427 EAL: Detected lcore 85 as core 26 on socket 1 00:05:59.427 EAL: Detected lcore 86 as core 27 on socket 1 00:05:59.427 EAL: 
Detected lcore 87 as core 28 on socket 1 00:05:59.427 EAL: Maximum logical cores by configuration: 128 00:05:59.427 EAL: Detected CPU lcores: 88 00:05:59.427 EAL: Detected NUMA nodes: 2 00:05:59.427 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:59.427 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:59.427 EAL: Checking presence of .so 'librte_eal.so' 00:05:59.427 EAL: Detected static linkage of DPDK 00:05:59.427 EAL: No shared files mode enabled, IPC will be disabled 00:05:59.427 EAL: Bus pci wants IOVA as 'DC' 00:05:59.427 EAL: Buses did not request a specific IOVA mode. 00:05:59.427 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:59.427 EAL: Selected IOVA mode 'VA' 00:05:59.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.427 EAL: Probing VFIO support... 00:05:59.427 EAL: IOMMU type 1 (Type 1) is supported 00:05:59.427 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:59.427 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:59.427 EAL: VFIO support initialized 00:05:59.427 EAL: Ask a virtual area of 0x2e000 bytes 00:05:59.427 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:59.427 EAL: Setting up physically contiguous memory... 00:05:59.427 EAL: Setting maximum number of open files to 524288 00:05:59.427 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:59.427 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:59.427 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:59.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.427 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:59.427 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.427 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:59.427 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:59.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.427 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:59.427 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.427 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:59.427 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:59.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.427 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:59.427 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.427 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:59.427 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:59.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.427 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:59.427 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.427 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:59.427 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:59.427 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:59.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.427 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:59.427 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.427 EAL: Virtual area found at 0x201000a00000 (size = 
0x400000000) 00:05:59.427 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:59.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.427 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:59.427 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.427 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:59.427 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:59.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.427 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:59.427 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.427 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:59.427 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:59.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.427 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:59.427 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.427 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:59.427 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:59.427 EAL: Hugepages will be freed exactly as allocated. 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: TSC frequency is ~2100000 KHz 00:05:59.427 EAL: Main lcore 0 is ready (tid=7fa9bd321a00;cpuset=[0]) 00:05:59.427 EAL: Trying to obtain current memory policy. 00:05:59.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.427 EAL: Restoring previous memory policy: 0 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was expanded by 2MB 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Mem event callback 'spdk:(nil)' registered 00:05:59.427 00:05:59.427 00:05:59.427 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.427 http://cunit.sourceforge.net/ 00:05:59.427 00:05:59.427 00:05:59.427 Suite: components_suite 00:05:59.427 Test: vtophys_malloc_test ...passed 00:05:59.427 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:59.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.427 EAL: Restoring previous memory policy: 4 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was expanded by 4MB 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was shrunk by 4MB 00:05:59.427 EAL: Trying to obtain current memory policy. 
00:05:59.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.427 EAL: Restoring previous memory policy: 4 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was expanded by 6MB 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was shrunk by 6MB 00:05:59.427 EAL: Trying to obtain current memory policy. 00:05:59.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.427 EAL: Restoring previous memory policy: 4 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was expanded by 10MB 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was shrunk by 10MB 00:05:59.427 EAL: Trying to obtain current memory policy. 00:05:59.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.427 EAL: Restoring previous memory policy: 4 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was expanded by 18MB 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was shrunk by 18MB 00:05:59.427 EAL: Trying to obtain current memory policy. 00:05:59.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.427 EAL: Restoring previous memory policy: 4 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was expanded by 34MB 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was shrunk by 34MB 00:05:59.427 EAL: Trying to obtain current memory policy. 00:05:59.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.427 EAL: Restoring previous memory policy: 4 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was expanded by 66MB 00:05:59.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.427 EAL: request: mp_malloc_sync 00:05:59.427 EAL: No shared files mode enabled, IPC is disabled 00:05:59.427 EAL: Heap on socket 0 was shrunk by 66MB 00:05:59.428 EAL: Trying to obtain current memory policy. 
00:05:59.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.428 EAL: Restoring previous memory policy: 4 00:05:59.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.428 EAL: request: mp_malloc_sync 00:05:59.428 EAL: No shared files mode enabled, IPC is disabled 00:05:59.428 EAL: Heap on socket 0 was expanded by 130MB 00:05:59.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.428 EAL: request: mp_malloc_sync 00:05:59.428 EAL: No shared files mode enabled, IPC is disabled 00:05:59.428 EAL: Heap on socket 0 was shrunk by 130MB 00:05:59.428 EAL: Trying to obtain current memory policy. 00:05:59.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.428 EAL: Restoring previous memory policy: 4 00:05:59.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.428 EAL: request: mp_malloc_sync 00:05:59.428 EAL: No shared files mode enabled, IPC is disabled 00:05:59.428 EAL: Heap on socket 0 was expanded by 258MB 00:05:59.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.428 EAL: request: mp_malloc_sync 00:05:59.428 EAL: No shared files mode enabled, IPC is disabled 00:05:59.428 EAL: Heap on socket 0 was shrunk by 258MB 00:05:59.428 EAL: Trying to obtain current memory policy. 00:05:59.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.687 EAL: Restoring previous memory policy: 4 00:05:59.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.687 EAL: request: mp_malloc_sync 00:05:59.687 EAL: No shared files mode enabled, IPC is disabled 00:05:59.687 EAL: Heap on socket 0 was expanded by 514MB 00:05:59.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.687 EAL: request: mp_malloc_sync 00:05:59.687 EAL: No shared files mode enabled, IPC is disabled 00:05:59.687 EAL: Heap on socket 0 was shrunk by 514MB 00:05:59.687 EAL: Trying to obtain current memory policy. 
00:05:59.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.946 EAL: Restoring previous memory policy: 4 00:05:59.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.946 EAL: request: mp_malloc_sync 00:05:59.946 EAL: No shared files mode enabled, IPC is disabled 00:05:59.946 EAL: Heap on socket 0 was expanded by 1026MB 00:06:00.205 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.205 EAL: request: mp_malloc_sync 00:06:00.205 EAL: No shared files mode enabled, IPC is disabled 00:06:00.205 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:00.205 passed 00:06:00.205 00:06:00.205 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.205 suites 1 1 n/a 0 0 00:06:00.205 tests 2 2 2 0 0 00:06:00.205 asserts 497 497 497 0 n/a 00:06:00.205 00:06:00.205 Elapsed time = 0.950 seconds 00:06:00.205 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.205 EAL: request: mp_malloc_sync 00:06:00.205 EAL: No shared files mode enabled, IPC is disabled 00:06:00.205 EAL: Heap on socket 0 was shrunk by 2MB 00:06:00.205 EAL: No shared files mode enabled, IPC is disabled 00:06:00.205 EAL: No shared files mode enabled, IPC is disabled 00:06:00.205 EAL: No shared files mode enabled, IPC is disabled 00:06:00.205 00:06:00.205 real 0m1.055s 00:06:00.205 user 0m0.619s 00:06:00.205 sys 0m0.412s 00:06:00.205 09:21:12 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.205 09:21:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:00.205 ************************************ 00:06:00.205 END TEST env_vtophys 00:06:00.205 ************************************ 00:06:00.205 09:21:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.205 09:21:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.205 09:21:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.205 09:21:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.465 ************************************ 00:06:00.465 START TEST env_pci 00:06:00.465 ************************************ 00:06:00.465 09:21:13 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.465 00:06:00.465 00:06:00.465 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.465 http://cunit.sourceforge.net/ 00:06:00.465 00:06:00.465 00:06:00.465 Suite: pci 00:06:00.465 Test: pci_hook ...[2024-07-25 09:21:13.038383] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 401576 has claimed it 00:06:00.465 EAL: Cannot find device (10000:00:01.0) 00:06:00.465 EAL: Failed to attach device on primary process 00:06:00.465 passed 00:06:00.465 00:06:00.465 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.465 suites 1 1 n/a 0 0 00:06:00.465 tests 1 1 1 0 0 00:06:00.465 asserts 25 25 25 0 n/a 00:06:00.465 00:06:00.465 Elapsed time = 0.021 seconds 00:06:00.465 00:06:00.465 real 0m0.032s 00:06:00.465 user 0m0.006s 00:06:00.465 sys 0m0.026s 00:06:00.465 09:21:13 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.465 09:21:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:00.465 ************************************ 00:06:00.465 END TEST env_pci 00:06:00.465 ************************************ 00:06:00.465 09:21:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:00.465 
09:21:13 env -- env/env.sh@15 -- # uname 00:06:00.465 09:21:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:00.465 09:21:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:00.465 09:21:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.465 09:21:13 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:00.465 09:21:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.465 09:21:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.465 ************************************ 00:06:00.465 START TEST env_dpdk_post_init 00:06:00.465 ************************************ 00:06:00.465 09:21:13 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.465 EAL: Detected CPU lcores: 88 00:06:00.465 EAL: Detected NUMA nodes: 2 00:06:00.465 EAL: Detected static linkage of DPDK 00:06:00.465 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:00.465 EAL: Selected IOVA mode 'VA' 00:06:00.465 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.465 EAL: VFIO support initialized 00:06:00.465 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:00.465 EAL: Using IOMMU type 1 (Type 1) 00:06:01.740 EAL: Probe PCI driver: spdk_nvme (8086:0953) device: 0000:dc:00.0 (socket 1) 00:06:02.677 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:dd:00.0 (socket 1) 00:06:03.553 EAL: Probe PCI driver: spdk_nvme (8086:0953) device: 0000:de:00.0 (socket 1) 00:06:04.491 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:df:00.0 (socket 1) 00:06:08.695 EAL: Releasing PCI mapped resource for 0000:dd:00.0 00:06:08.695 EAL: Calling pci_unmap_resource for 0000:dd:00.0 at 0x202001004000 00:06:08.695 EAL: Releasing PCI mapped resource for 0000:df:00.0 00:06:08.695 EAL: Calling pci_unmap_resource for 0000:df:00.0 at 0x20200100c000 00:06:08.953 EAL: Releasing PCI mapped resource for 0000:dc:00.0 00:06:08.953 EAL: Calling pci_unmap_resource for 0000:dc:00.0 at 0x202001000000 00:06:09.521 EAL: Releasing PCI mapped resource for 0000:de:00.0 00:06:09.521 EAL: Calling pci_unmap_resource for 0000:de:00.0 at 0x202001008000 00:06:09.781 Starting DPDK initialization... 00:06:09.781 Starting SPDK post initialization... 00:06:09.781 SPDK NVMe probe 00:06:09.781 Attaching to 0000:dc:00.0 00:06:09.781 Attaching to 0000:dd:00.0 00:06:09.781 Attaching to 0000:de:00.0 00:06:09.781 Attaching to 0000:df:00.0 00:06:09.781 Attached to 0000:de:00.0 00:06:09.781 Attached to 0000:dc:00.0 00:06:09.781 Attached to 0000:dd:00.0 00:06:09.781 Attached to 0000:df:00.0 00:06:09.781 Cleaning up... 
00:06:09.781 00:06:09.781 real 0m9.262s 00:06:09.781 user 0m6.017s 00:06:09.781 sys 0m0.285s 00:06:09.781 09:21:22 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.781 09:21:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.781 ************************************ 00:06:09.781 END TEST env_dpdk_post_init 00:06:09.781 ************************************ 00:06:09.781 09:21:22 env -- env/env.sh@26 -- # uname 00:06:09.781 09:21:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:09.781 09:21:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:09.781 09:21:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.781 09:21:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.781 09:21:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.781 ************************************ 00:06:09.781 START TEST env_mem_callbacks 00:06:09.781 ************************************ 00:06:09.781 09:21:22 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:09.781 EAL: Detected CPU lcores: 88 00:06:09.781 EAL: Detected NUMA nodes: 2 00:06:09.781 EAL: Detected static linkage of DPDK 00:06:09.781 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:09.781 EAL: Selected IOVA mode 'VA' 00:06:09.781 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.781 EAL: VFIO support initialized 00:06:09.781 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:09.781 00:06:09.781 00:06:09.781 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.781 http://cunit.sourceforge.net/ 00:06:09.781 00:06:09.781 00:06:09.781 Suite: memory 00:06:09.781 Test: test ... 
00:06:09.781 register 0x200000200000 2097152 00:06:09.781 malloc 3145728 00:06:09.781 register 0x200000400000 4194304 00:06:09.781 buf 0x200000500000 len 3145728 PASSED 00:06:09.781 malloc 64 00:06:09.781 buf 0x2000004fff40 len 64 PASSED 00:06:09.781 malloc 4194304 00:06:09.781 register 0x200000800000 6291456 00:06:09.781 buf 0x200000a00000 len 4194304 PASSED 00:06:09.781 free 0x200000500000 3145728 00:06:09.781 free 0x2000004fff40 64 00:06:09.781 unregister 0x200000400000 4194304 PASSED 00:06:09.781 free 0x200000a00000 4194304 00:06:09.781 unregister 0x200000800000 6291456 PASSED 00:06:09.781 malloc 8388608 00:06:09.781 register 0x200000400000 10485760 00:06:09.781 buf 0x200000600000 len 8388608 PASSED 00:06:09.781 free 0x200000600000 8388608 00:06:09.781 unregister 0x200000400000 10485760 PASSED 00:06:09.781 passed 00:06:09.781 00:06:09.781 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.781 suites 1 1 n/a 0 0 00:06:09.781 tests 1 1 1 0 0 00:06:09.781 asserts 15 15 15 0 n/a 00:06:09.781 00:06:09.781 Elapsed time = 0.005 seconds 00:06:09.781 00:06:09.781 real 0m0.055s 00:06:09.781 user 0m0.014s 00:06:09.781 sys 0m0.041s 00:06:09.781 09:21:22 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.781 09:21:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:09.781 ************************************ 00:06:09.781 END TEST env_mem_callbacks 00:06:09.781 ************************************ 00:06:09.781 00:06:09.781 real 0m10.907s 00:06:09.781 user 0m6.909s 00:06:09.781 sys 0m1.036s 00:06:09.781 09:21:22 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.781 09:21:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.781 ************************************ 00:06:09.781 END TEST env 00:06:09.781 ************************************ 00:06:09.781 09:21:22 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:09.781 09:21:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.781 09:21:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.781 09:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:10.040 ************************************ 00:06:10.040 START TEST rpc 00:06:10.040 ************************************ 00:06:10.040 09:21:22 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:10.040 * Looking for test storage... 00:06:10.040 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:10.040 09:21:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=403183 00:06:10.040 09:21:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.040 09:21:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 403183 00:06:10.040 09:21:22 rpc -- common/autotest_common.sh@831 -- # '[' -z 403183 ']' 00:06:10.040 09:21:22 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.040 09:21:22 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.040 09:21:22 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
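The register/unregister pairs above come from the mem_callbacks test's notify hook, which reports each hugepage-backed region DPDK maps or unmaps; sizes come in multiples of the 2048 kB hugepage, which is why the 3 MB malloc surfaces as a 4 MB registration. The rpc suite starting here talks to the freshly launched spdk_tgt over its Unix socket; a sketch of the rpc_integrity steps below, assuming scripts/rpc.py as a stand-in for the rpc_cmd wrapper:

  # Rough rpc.py equivalent of the rpc_integrity flow.
  ./build/bin/spdk_tgt -e bdev &                 # target with the bdev tracepoint group enabled
  ./scripts/rpc.py bdev_malloc_create 8 512      # 8 MiB at 512 B blocks -> the 16384-block Malloc0 below
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs                # emits the JSON descriptors seen below
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0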
00:06:10.040 09:21:22 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.040 09:21:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.040 09:21:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:10.040 [2024-07-25 09:21:22.722138] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:10.040 [2024-07-25 09:21:22.722209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403183 ] 00:06:10.040 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.040 [2024-07-25 09:21:22.783491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.300 [2024-07-25 09:21:22.865269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:10.300 [2024-07-25 09:21:22.865303] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 403183' to capture a snapshot of events at runtime. 00:06:10.300 [2024-07-25 09:21:22.865309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.300 [2024-07-25 09:21:22.865315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.300 [2024-07-25 09:21:22.865320] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid403183 for offline analysis/debug. 00:06:10.300 [2024-07-25 09:21:22.865340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.866 09:21:23 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.866 09:21:23 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:10.866 09:21:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:10.866 09:21:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:10.866 09:21:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:10.866 09:21:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:10.866 09:21:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.866 09:21:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.866 09:21:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.866 ************************************ 00:06:10.866 START TEST rpc_integrity 00:06:10.866 ************************************ 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:10.866 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.866 09:21:23 
rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:10.866 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:10.866 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:10.866 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.866 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:10.866 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.866 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.866 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:10.866 { 00:06:10.866 "name": "Malloc0", 00:06:10.866 "aliases": [ 00:06:10.866 "3cf3a29e-6bcd-4468-b6c9-d68bc5a05a30" 00:06:10.866 ], 00:06:10.866 "product_name": "Malloc disk", 00:06:10.866 "block_size": 512, 00:06:10.866 "num_blocks": 16384, 00:06:10.866 "uuid": "3cf3a29e-6bcd-4468-b6c9-d68bc5a05a30", 00:06:10.866 "assigned_rate_limits": { 00:06:10.866 "rw_ios_per_sec": 0, 00:06:10.866 "rw_mbytes_per_sec": 0, 00:06:10.866 "r_mbytes_per_sec": 0, 00:06:10.866 "w_mbytes_per_sec": 0 00:06:10.866 }, 00:06:10.866 "claimed": false, 00:06:10.866 "zoned": false, 00:06:10.866 "supported_io_types": { 00:06:10.866 "read": true, 00:06:10.866 "write": true, 00:06:10.866 "unmap": true, 00:06:10.866 "flush": true, 00:06:10.867 "reset": true, 00:06:10.867 "nvme_admin": false, 00:06:10.867 "nvme_io": false, 00:06:10.867 "nvme_io_md": false, 00:06:10.867 "write_zeroes": true, 00:06:10.867 "zcopy": true, 00:06:10.867 "get_zone_info": false, 00:06:10.867 "zone_management": false, 00:06:10.867 "zone_append": false, 00:06:10.867 "compare": false, 00:06:10.867 "compare_and_write": false, 00:06:10.867 "abort": true, 00:06:10.867 "seek_hole": false, 00:06:10.867 "seek_data": false, 00:06:10.867 "copy": true, 00:06:10.867 "nvme_iov_md": false 00:06:10.867 }, 00:06:10.867 "memory_domains": [ 00:06:10.867 { 00:06:10.867 "dma_device_id": "system", 00:06:10.867 "dma_device_type": 1 00:06:10.867 }, 00:06:10.867 { 00:06:10.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.867 "dma_device_type": 2 00:06:10.867 } 00:06:10.867 ], 00:06:10.867 "driver_specific": {} 00:06:10.867 } 00:06:10.867 ]' 00:06:10.867 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:11.125 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:11.125 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:11.125 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.125 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.125 [2024-07-25 09:21:23.685491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:11.125 [2024-07-25 09:21:23.685522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:11.125 [2024-07-25 09:21:23.685535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5040240 00:06:11.125 [2024-07-25 09:21:23.685542] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:11.125 [2024-07-25 09:21:23.686313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:11.125 [2024-07-25 09:21:23.686334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:11.125 Passthru0 00:06:11.125 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.125 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:11.125 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.125 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.125 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.125 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:11.125 { 00:06:11.125 "name": "Malloc0", 00:06:11.125 "aliases": [ 00:06:11.125 "3cf3a29e-6bcd-4468-b6c9-d68bc5a05a30" 00:06:11.125 ], 00:06:11.125 "product_name": "Malloc disk", 00:06:11.125 "block_size": 512, 00:06:11.125 "num_blocks": 16384, 00:06:11.125 "uuid": "3cf3a29e-6bcd-4468-b6c9-d68bc5a05a30", 00:06:11.125 "assigned_rate_limits": { 00:06:11.125 "rw_ios_per_sec": 0, 00:06:11.125 "rw_mbytes_per_sec": 0, 00:06:11.125 "r_mbytes_per_sec": 0, 00:06:11.125 "w_mbytes_per_sec": 0 00:06:11.125 }, 00:06:11.125 "claimed": true, 00:06:11.125 "claim_type": "exclusive_write", 00:06:11.125 "zoned": false, 00:06:11.125 "supported_io_types": { 00:06:11.125 "read": true, 00:06:11.125 "write": true, 00:06:11.125 "unmap": true, 00:06:11.125 "flush": true, 00:06:11.125 "reset": true, 00:06:11.125 "nvme_admin": false, 00:06:11.125 "nvme_io": false, 00:06:11.125 "nvme_io_md": false, 00:06:11.125 "write_zeroes": true, 00:06:11.125 "zcopy": true, 00:06:11.125 "get_zone_info": false, 00:06:11.125 "zone_management": false, 00:06:11.125 "zone_append": false, 00:06:11.125 "compare": false, 00:06:11.125 "compare_and_write": false, 00:06:11.125 "abort": true, 00:06:11.125 "seek_hole": false, 00:06:11.125 "seek_data": false, 00:06:11.125 "copy": true, 00:06:11.125 "nvme_iov_md": false 00:06:11.125 }, 00:06:11.125 "memory_domains": [ 00:06:11.125 { 00:06:11.125 "dma_device_id": "system", 00:06:11.125 "dma_device_type": 1 00:06:11.125 }, 00:06:11.125 { 00:06:11.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.125 "dma_device_type": 2 00:06:11.125 } 00:06:11.125 ], 00:06:11.125 "driver_specific": {} 00:06:11.125 }, 00:06:11.125 { 00:06:11.125 "name": "Passthru0", 00:06:11.125 "aliases": [ 00:06:11.125 "e4302b22-a00f-5681-beaa-9193f5355556" 00:06:11.125 ], 00:06:11.125 "product_name": "passthru", 00:06:11.125 "block_size": 512, 00:06:11.125 "num_blocks": 16384, 00:06:11.125 "uuid": "e4302b22-a00f-5681-beaa-9193f5355556", 00:06:11.125 "assigned_rate_limits": { 00:06:11.125 "rw_ios_per_sec": 0, 00:06:11.125 "rw_mbytes_per_sec": 0, 00:06:11.125 "r_mbytes_per_sec": 0, 00:06:11.125 "w_mbytes_per_sec": 0 00:06:11.125 }, 00:06:11.125 "claimed": false, 00:06:11.125 "zoned": false, 00:06:11.125 "supported_io_types": { 00:06:11.125 "read": true, 00:06:11.125 "write": true, 00:06:11.125 "unmap": true, 00:06:11.125 "flush": true, 00:06:11.125 "reset": true, 00:06:11.125 "nvme_admin": false, 00:06:11.125 "nvme_io": false, 00:06:11.125 "nvme_io_md": false, 00:06:11.125 "write_zeroes": true, 00:06:11.125 "zcopy": true, 00:06:11.125 "get_zone_info": false, 00:06:11.125 "zone_management": false, 00:06:11.125 "zone_append": false, 00:06:11.125 "compare": false, 00:06:11.125 "compare_and_write": 
false, 00:06:11.125 "abort": true, 00:06:11.125 "seek_hole": false, 00:06:11.125 "seek_data": false, 00:06:11.125 "copy": true, 00:06:11.125 "nvme_iov_md": false 00:06:11.125 }, 00:06:11.125 "memory_domains": [ 00:06:11.125 { 00:06:11.125 "dma_device_id": "system", 00:06:11.125 "dma_device_type": 1 00:06:11.125 }, 00:06:11.125 { 00:06:11.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.125 "dma_device_type": 2 00:06:11.125 } 00:06:11.125 ], 00:06:11.125 "driver_specific": { 00:06:11.125 "passthru": { 00:06:11.125 "name": "Passthru0", 00:06:11.126 "base_bdev_name": "Malloc0" 00:06:11.126 } 00:06:11.126 } 00:06:11.126 } 00:06:11.126 ]' 00:06:11.126 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:11.126 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:11.126 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.126 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.126 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.126 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:11.126 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:11.126 09:21:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:11.126 00:06:11.126 real 0m0.264s 00:06:11.126 user 0m0.166s 00:06:11.126 sys 0m0.033s 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.126 09:21:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.126 ************************************ 00:06:11.126 END TEST rpc_integrity 00:06:11.126 ************************************ 00:06:11.126 09:21:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:11.126 09:21:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.126 09:21:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.126 09:21:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.126 ************************************ 00:06:11.126 START TEST rpc_plugins 00:06:11.126 ************************************ 00:06:11.126 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:11.126 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:11.126 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.126 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.126 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.126 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:11.126 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:06:11.126 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.126 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.126 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.126 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:11.126 { 00:06:11.126 "name": "Malloc1", 00:06:11.126 "aliases": [ 00:06:11.126 "088739a2-1898-4ba6-9f16-619802f7b7ff" 00:06:11.126 ], 00:06:11.126 "product_name": "Malloc disk", 00:06:11.126 "block_size": 4096, 00:06:11.126 "num_blocks": 256, 00:06:11.126 "uuid": "088739a2-1898-4ba6-9f16-619802f7b7ff", 00:06:11.126 "assigned_rate_limits": { 00:06:11.126 "rw_ios_per_sec": 0, 00:06:11.126 "rw_mbytes_per_sec": 0, 00:06:11.126 "r_mbytes_per_sec": 0, 00:06:11.126 "w_mbytes_per_sec": 0 00:06:11.126 }, 00:06:11.126 "claimed": false, 00:06:11.126 "zoned": false, 00:06:11.126 "supported_io_types": { 00:06:11.126 "read": true, 00:06:11.126 "write": true, 00:06:11.126 "unmap": true, 00:06:11.126 "flush": true, 00:06:11.126 "reset": true, 00:06:11.126 "nvme_admin": false, 00:06:11.126 "nvme_io": false, 00:06:11.126 "nvme_io_md": false, 00:06:11.126 "write_zeroes": true, 00:06:11.126 "zcopy": true, 00:06:11.126 "get_zone_info": false, 00:06:11.126 "zone_management": false, 00:06:11.126 "zone_append": false, 00:06:11.126 "compare": false, 00:06:11.126 "compare_and_write": false, 00:06:11.126 "abort": true, 00:06:11.126 "seek_hole": false, 00:06:11.126 "seek_data": false, 00:06:11.126 "copy": true, 00:06:11.126 "nvme_iov_md": false 00:06:11.126 }, 00:06:11.126 "memory_domains": [ 00:06:11.126 { 00:06:11.126 "dma_device_id": "system", 00:06:11.126 "dma_device_type": 1 00:06:11.126 }, 00:06:11.126 { 00:06:11.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.126 "dma_device_type": 2 00:06:11.126 } 00:06:11.126 ], 00:06:11.126 "driver_specific": {} 00:06:11.126 } 00:06:11.126 ]' 00:06:11.126 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:11.385 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:11.385 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:11.385 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.385 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.385 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:11.385 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.385 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 09:21:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.385 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:11.385 09:21:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:11.385 09:21:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:11.385 00:06:11.385 real 0m0.130s 00:06:11.385 user 0m0.079s 00:06:11.385 sys 0m0.021s 00:06:11.385 09:21:24 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.385 09:21:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 ************************************ 00:06:11.385 END TEST rpc_plugins 00:06:11.385 ************************************ 00:06:11.385 09:21:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:11.385 09:21:24 rpc 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.385 09:21:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.385 09:21:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 ************************************ 00:06:11.385 START TEST rpc_trace_cmd_test 00:06:11.385 ************************************ 00:06:11.385 09:21:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:11.385 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:11.385 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:11.385 09:21:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.385 09:21:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.385 09:21:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.385 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:11.385 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid403183", 00:06:11.385 "tpoint_group_mask": "0x8", 00:06:11.385 "iscsi_conn": { 00:06:11.385 "mask": "0x2", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "scsi": { 00:06:11.385 "mask": "0x4", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "bdev": { 00:06:11.385 "mask": "0x8", 00:06:11.385 "tpoint_mask": "0xffffffffffffffff" 00:06:11.385 }, 00:06:11.385 "nvmf_rdma": { 00:06:11.385 "mask": "0x10", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "nvmf_tcp": { 00:06:11.385 "mask": "0x20", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "ftl": { 00:06:11.385 "mask": "0x40", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "blobfs": { 00:06:11.385 "mask": "0x80", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "dsa": { 00:06:11.385 "mask": "0x200", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "thread": { 00:06:11.385 "mask": "0x400", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "nvme_pcie": { 00:06:11.385 "mask": "0x800", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "iaa": { 00:06:11.385 "mask": "0x1000", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "nvme_tcp": { 00:06:11.385 "mask": "0x2000", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "bdev_nvme": { 00:06:11.385 "mask": "0x4000", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 }, 00:06:11.385 "sock": { 00:06:11.385 "mask": "0x8000", 00:06:11.385 "tpoint_mask": "0x0" 00:06:11.385 } 00:06:11.385 }' 00:06:11.385 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:11.386 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:11.386 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:11.386 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:11.386 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:11.644 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:11.644 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:11.644 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:11.644 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:11.644 09:21:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:11.644 00:06:11.644 real 0m0.220s 00:06:11.644 user 0m0.190s 00:06:11.644 
sys 0m0.022s 00:06:11.644 09:21:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.644 09:21:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.644 ************************************ 00:06:11.644 END TEST rpc_trace_cmd_test 00:06:11.644 ************************************ 00:06:11.644 09:21:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:11.644 09:21:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:11.644 09:21:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:11.644 09:21:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.644 09:21:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.644 09:21:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.644 ************************************ 00:06:11.644 START TEST rpc_daemon_integrity 00:06:11.644 ************************************ 00:06:11.644 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:11.644 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:11.644 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.644 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.644 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.644 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:11.644 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:11.644 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:11.645 { 00:06:11.645 "name": "Malloc2", 00:06:11.645 "aliases": [ 00:06:11.645 "63bdfb33-4a2a-4ea5-a486-f129a419f1d5" 00:06:11.645 ], 00:06:11.645 "product_name": "Malloc disk", 00:06:11.645 "block_size": 512, 00:06:11.645 "num_blocks": 16384, 00:06:11.645 "uuid": "63bdfb33-4a2a-4ea5-a486-f129a419f1d5", 00:06:11.645 "assigned_rate_limits": { 00:06:11.645 "rw_ios_per_sec": 0, 00:06:11.645 "rw_mbytes_per_sec": 0, 00:06:11.645 "r_mbytes_per_sec": 0, 00:06:11.645 "w_mbytes_per_sec": 0 00:06:11.645 }, 00:06:11.645 "claimed": false, 00:06:11.645 "zoned": false, 00:06:11.645 "supported_io_types": { 00:06:11.645 "read": true, 00:06:11.645 "write": true, 00:06:11.645 "unmap": true, 00:06:11.645 "flush": true, 00:06:11.645 "reset": true, 00:06:11.645 "nvme_admin": false, 00:06:11.645 "nvme_io": false, 00:06:11.645 "nvme_io_md": false, 00:06:11.645 "write_zeroes": true, 00:06:11.645 "zcopy": true, 00:06:11.645 "get_zone_info": false, 00:06:11.645 
"zone_management": false, 00:06:11.645 "zone_append": false, 00:06:11.645 "compare": false, 00:06:11.645 "compare_and_write": false, 00:06:11.645 "abort": true, 00:06:11.645 "seek_hole": false, 00:06:11.645 "seek_data": false, 00:06:11.645 "copy": true, 00:06:11.645 "nvme_iov_md": false 00:06:11.645 }, 00:06:11.645 "memory_domains": [ 00:06:11.645 { 00:06:11.645 "dma_device_id": "system", 00:06:11.645 "dma_device_type": 1 00:06:11.645 }, 00:06:11.645 { 00:06:11.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.645 "dma_device_type": 2 00:06:11.645 } 00:06:11.645 ], 00:06:11.645 "driver_specific": {} 00:06:11.645 } 00:06:11.645 ]' 00:06:11.645 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.904 [2024-07-25 09:21:24.487581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:11.904 [2024-07-25 09:21:24.487613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:11.904 [2024-07-25 09:21:24.487627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4ffba10 00:06:11.904 [2024-07-25 09:21:24.487633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:11.904 [2024-07-25 09:21:24.488355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:11.904 [2024-07-25 09:21:24.488377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:11.904 Passthru0 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.904 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:11.904 { 00:06:11.904 "name": "Malloc2", 00:06:11.904 "aliases": [ 00:06:11.904 "63bdfb33-4a2a-4ea5-a486-f129a419f1d5" 00:06:11.904 ], 00:06:11.904 "product_name": "Malloc disk", 00:06:11.904 "block_size": 512, 00:06:11.904 "num_blocks": 16384, 00:06:11.904 "uuid": "63bdfb33-4a2a-4ea5-a486-f129a419f1d5", 00:06:11.904 "assigned_rate_limits": { 00:06:11.904 "rw_ios_per_sec": 0, 00:06:11.904 "rw_mbytes_per_sec": 0, 00:06:11.904 "r_mbytes_per_sec": 0, 00:06:11.904 "w_mbytes_per_sec": 0 00:06:11.904 }, 00:06:11.905 "claimed": true, 00:06:11.905 "claim_type": "exclusive_write", 00:06:11.905 "zoned": false, 00:06:11.905 "supported_io_types": { 00:06:11.905 "read": true, 00:06:11.905 "write": true, 00:06:11.905 "unmap": true, 00:06:11.905 "flush": true, 00:06:11.905 "reset": true, 00:06:11.905 "nvme_admin": false, 00:06:11.905 "nvme_io": false, 00:06:11.905 "nvme_io_md": false, 00:06:11.905 "write_zeroes": true, 00:06:11.905 "zcopy": true, 00:06:11.905 "get_zone_info": false, 00:06:11.905 "zone_management": false, 00:06:11.905 "zone_append": false, 00:06:11.905 "compare": false, 00:06:11.905 "compare_and_write": 
false, 00:06:11.905 "abort": true, 00:06:11.905 "seek_hole": false, 00:06:11.905 "seek_data": false, 00:06:11.905 "copy": true, 00:06:11.905 "nvme_iov_md": false 00:06:11.905 }, 00:06:11.905 "memory_domains": [ 00:06:11.905 { 00:06:11.905 "dma_device_id": "system", 00:06:11.905 "dma_device_type": 1 00:06:11.905 }, 00:06:11.905 { 00:06:11.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.905 "dma_device_type": 2 00:06:11.905 } 00:06:11.905 ], 00:06:11.905 "driver_specific": {} 00:06:11.905 }, 00:06:11.905 { 00:06:11.905 "name": "Passthru0", 00:06:11.905 "aliases": [ 00:06:11.905 "06a51f54-a085-5e7f-b07f-dde9a3e8ab81" 00:06:11.905 ], 00:06:11.905 "product_name": "passthru", 00:06:11.905 "block_size": 512, 00:06:11.905 "num_blocks": 16384, 00:06:11.905 "uuid": "06a51f54-a085-5e7f-b07f-dde9a3e8ab81", 00:06:11.905 "assigned_rate_limits": { 00:06:11.905 "rw_ios_per_sec": 0, 00:06:11.905 "rw_mbytes_per_sec": 0, 00:06:11.905 "r_mbytes_per_sec": 0, 00:06:11.905 "w_mbytes_per_sec": 0 00:06:11.905 }, 00:06:11.905 "claimed": false, 00:06:11.905 "zoned": false, 00:06:11.905 "supported_io_types": { 00:06:11.905 "read": true, 00:06:11.905 "write": true, 00:06:11.905 "unmap": true, 00:06:11.905 "flush": true, 00:06:11.905 "reset": true, 00:06:11.905 "nvme_admin": false, 00:06:11.905 "nvme_io": false, 00:06:11.905 "nvme_io_md": false, 00:06:11.905 "write_zeroes": true, 00:06:11.905 "zcopy": true, 00:06:11.905 "get_zone_info": false, 00:06:11.905 "zone_management": false, 00:06:11.905 "zone_append": false, 00:06:11.905 "compare": false, 00:06:11.905 "compare_and_write": false, 00:06:11.905 "abort": true, 00:06:11.905 "seek_hole": false, 00:06:11.905 "seek_data": false, 00:06:11.905 "copy": true, 00:06:11.905 "nvme_iov_md": false 00:06:11.905 }, 00:06:11.905 "memory_domains": [ 00:06:11.905 { 00:06:11.905 "dma_device_id": "system", 00:06:11.905 "dma_device_type": 1 00:06:11.905 }, 00:06:11.905 { 00:06:11.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.905 "dma_device_type": 2 00:06:11.905 } 00:06:11.905 ], 00:06:11.905 "driver_specific": { 00:06:11.905 "passthru": { 00:06:11.905 "name": "Passthru0", 00:06:11.905 "base_bdev_name": "Malloc2" 00:06:11.905 } 00:06:11.905 } 00:06:11.905 } 00:06:11.905 ]' 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:11.905 00:06:11.905 real 0m0.252s 00:06:11.905 user 0m0.166s 00:06:11.905 sys 0m0.024s 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.905 09:21:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 ************************************ 00:06:11.905 END TEST rpc_daemon_integrity 00:06:11.905 ************************************ 00:06:11.905 09:21:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:11.905 09:21:24 rpc -- rpc/rpc.sh@84 -- # killprocess 403183 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@950 -- # '[' -z 403183 ']' 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@954 -- # kill -0 403183 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@955 -- # uname 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 403183 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 403183' 00:06:11.905 killing process with pid 403183 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@969 -- # kill 403183 00:06:11.905 09:21:24 rpc -- common/autotest_common.sh@974 -- # wait 403183 00:06:12.474 00:06:12.474 real 0m2.368s 00:06:12.474 user 0m3.048s 00:06:12.474 sys 0m0.631s 00:06:12.474 09:21:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.474 09:21:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.474 ************************************ 00:06:12.474 END TEST rpc 00:06:12.474 ************************************ 00:06:12.474 09:21:25 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:12.474 09:21:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.474 09:21:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.474 09:21:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.474 ************************************ 00:06:12.474 START TEST skip_rpc 00:06:12.474 ************************************ 00:06:12.474 09:21:25 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:12.474 * Looking for test storage... 
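The skip_rpc case that follows starts the target with its RPC server disabled and asserts that an RPC call fails; stripped of the autotest wrappers, the check is roughly this sketch (again assuming rpc.py in place of rpc_cmd):

  # With --no-rpc-server the Unix socket is never created, so
  # spdk_get_version must fail; the test expects es=1 from the NOT helper.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                        # mirrors skip_rpc.sh's fixed wait
  ./scripts/rpc.py spdk_get_version && echo "FAIL: RPC unexpectedly answered"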
00:06:12.474 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:12.474 09:21:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:12.474 09:21:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:12.474 09:21:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:12.474 09:21:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.474 09:21:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.474 09:21:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.474 ************************************ 00:06:12.474 START TEST skip_rpc 00:06:12.474 ************************************ 00:06:12.474 09:21:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:12.474 09:21:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=403783 00:06:12.474 09:21:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.474 09:21:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:12.474 09:21:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:12.474 [2024-07-25 09:21:25.183975] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:12.474 [2024-07-25 09:21:25.184044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403783 ] 00:06:12.474 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.474 [2024-07-25 09:21:25.240083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.734 [2024-07-25 09:21:25.316013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.010 09:21:30 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 403783 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 403783 ']' 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 403783 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 403783 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 403783' 00:06:18.010 killing process with pid 403783 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 403783 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 403783 00:06:18.010 00:06:18.010 real 0m5.351s 00:06:18.010 user 0m5.139s 00:06:18.010 sys 0m0.237s 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.010 09:21:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.010 ************************************ 00:06:18.010 END TEST skip_rpc 00:06:18.010 ************************************ 00:06:18.010 09:21:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:18.010 09:21:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.010 09:21:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.010 09:21:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.010 ************************************ 00:06:18.010 START TEST skip_rpc_with_json 00:06:18.010 ************************************ 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=404660 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 404660 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 404660 ']' 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
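skip_rpc_with_json, starting here, exercises a save/load round trip: create a TCP transport over RPC, dump the live state with save_config, then replay it into a fresh target launched with --json and confirm the transport comes back (the later grep for 'TCP Transport Init' is that confirmation). A rough sketch, with rpc.py and the output redirect standing in for the harness plumbing:

  # Config round trip checked below.
  ./scripts/rpc.py nvmf_get_transports --trtype tcp    # fails first: no transport exists yet
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5 && grep -q 'TCP Transport Init' test/rpc/log.txt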
00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.010 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.010 [2024-07-25 09:21:30.603270] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:18.010 [2024-07-25 09:21:30.603338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404660 ] 00:06:18.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.010 [2024-07-25 09:21:30.662658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.010 [2024-07-25 09:21:30.733480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.270 [2024-07-25 09:21:30.936087] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:18.270 request: 00:06:18.270 { 00:06:18.270 "trtype": "tcp", 00:06:18.270 "method": "nvmf_get_transports", 00:06:18.270 "req_id": 1 00:06:18.270 } 00:06:18.270 Got JSON-RPC error response 00:06:18.270 response: 00:06:18.270 { 00:06:18.270 "code": -19, 00:06:18.270 "message": "No such device" 00:06:18.270 } 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.270 [2024-07-25 09:21:30.948351] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.270 09:21:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.529 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.529 09:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:18.529 { 00:06:18.529 "subsystems": [ 00:06:18.529 { 00:06:18.529 "subsystem": "scheduler", 00:06:18.529 "config": [ 00:06:18.529 { 00:06:18.529 "method": "framework_set_scheduler", 00:06:18.529 "params": { 00:06:18.529 "name": "static" 00:06:18.529 } 00:06:18.529 } 00:06:18.529 ] 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "subsystem": "vmd", 00:06:18.529 "config": [] 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "subsystem": "sock", 00:06:18.529 "config": [ 00:06:18.529 { 00:06:18.529 "method": "sock_set_default_impl", 00:06:18.529 
"params": { 00:06:18.529 "impl_name": "posix" 00:06:18.529 } 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "method": "sock_impl_set_options", 00:06:18.529 "params": { 00:06:18.529 "impl_name": "ssl", 00:06:18.529 "recv_buf_size": 4096, 00:06:18.529 "send_buf_size": 4096, 00:06:18.529 "enable_recv_pipe": true, 00:06:18.529 "enable_quickack": false, 00:06:18.529 "enable_placement_id": 0, 00:06:18.529 "enable_zerocopy_send_server": true, 00:06:18.529 "enable_zerocopy_send_client": false, 00:06:18.529 "zerocopy_threshold": 0, 00:06:18.529 "tls_version": 0, 00:06:18.529 "enable_ktls": false 00:06:18.529 } 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "method": "sock_impl_set_options", 00:06:18.529 "params": { 00:06:18.529 "impl_name": "posix", 00:06:18.529 "recv_buf_size": 2097152, 00:06:18.529 "send_buf_size": 2097152, 00:06:18.529 "enable_recv_pipe": true, 00:06:18.529 "enable_quickack": false, 00:06:18.529 "enable_placement_id": 0, 00:06:18.529 "enable_zerocopy_send_server": true, 00:06:18.529 "enable_zerocopy_send_client": false, 00:06:18.529 "zerocopy_threshold": 0, 00:06:18.529 "tls_version": 0, 00:06:18.529 "enable_ktls": false 00:06:18.529 } 00:06:18.529 } 00:06:18.529 ] 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "subsystem": "iobuf", 00:06:18.529 "config": [ 00:06:18.529 { 00:06:18.529 "method": "iobuf_set_options", 00:06:18.529 "params": { 00:06:18.529 "small_pool_count": 8192, 00:06:18.529 "large_pool_count": 1024, 00:06:18.529 "small_bufsize": 8192, 00:06:18.529 "large_bufsize": 135168 00:06:18.529 } 00:06:18.529 } 00:06:18.529 ] 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "subsystem": "keyring", 00:06:18.529 "config": [] 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "subsystem": "vfio_user_target", 00:06:18.529 "config": null 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "subsystem": "accel", 00:06:18.529 "config": [ 00:06:18.529 { 00:06:18.529 "method": "accel_set_options", 00:06:18.529 "params": { 00:06:18.529 "small_cache_size": 128, 00:06:18.529 "large_cache_size": 16, 00:06:18.529 "task_count": 2048, 00:06:18.529 "sequence_count": 2048, 00:06:18.529 "buf_count": 2048 00:06:18.529 } 00:06:18.529 } 00:06:18.529 ] 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "subsystem": "bdev", 00:06:18.529 "config": [ 00:06:18.529 { 00:06:18.529 "method": "bdev_set_options", 00:06:18.529 "params": { 00:06:18.529 "bdev_io_pool_size": 65535, 00:06:18.529 "bdev_io_cache_size": 256, 00:06:18.529 "bdev_auto_examine": true, 00:06:18.529 "iobuf_small_cache_size": 128, 00:06:18.529 "iobuf_large_cache_size": 16 00:06:18.529 } 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "method": "bdev_raid_set_options", 00:06:18.529 "params": { 00:06:18.529 "process_window_size_kb": 1024, 00:06:18.529 "process_max_bandwidth_mb_sec": 0 00:06:18.529 } 00:06:18.529 }, 00:06:18.529 { 00:06:18.529 "method": "bdev_nvme_set_options", 00:06:18.529 "params": { 00:06:18.529 "action_on_timeout": "none", 00:06:18.529 "timeout_us": 0, 00:06:18.529 "timeout_admin_us": 0, 00:06:18.529 "keep_alive_timeout_ms": 10000, 00:06:18.529 "arbitration_burst": 0, 00:06:18.529 "low_priority_weight": 0, 00:06:18.530 "medium_priority_weight": 0, 00:06:18.530 "high_priority_weight": 0, 00:06:18.530 "nvme_adminq_poll_period_us": 10000, 00:06:18.530 "nvme_ioq_poll_period_us": 0, 00:06:18.530 "io_queue_requests": 0, 00:06:18.530 "delay_cmd_submit": true, 00:06:18.530 "transport_retry_count": 4, 00:06:18.530 "bdev_retry_count": 3, 00:06:18.530 "transport_ack_timeout": 0, 00:06:18.530 "ctrlr_loss_timeout_sec": 0, 00:06:18.530 "reconnect_delay_sec": 0, 
00:06:18.530 "fast_io_fail_timeout_sec": 0, 00:06:18.530 "disable_auto_failback": false, 00:06:18.530 "generate_uuids": false, 00:06:18.530 "transport_tos": 0, 00:06:18.530 "nvme_error_stat": false, 00:06:18.530 "rdma_srq_size": 0, 00:06:18.530 "io_path_stat": false, 00:06:18.530 "allow_accel_sequence": false, 00:06:18.530 "rdma_max_cq_size": 0, 00:06:18.530 "rdma_cm_event_timeout_ms": 0, 00:06:18.530 "dhchap_digests": [ 00:06:18.530 "sha256", 00:06:18.530 "sha384", 00:06:18.530 "sha512" 00:06:18.530 ], 00:06:18.530 "dhchap_dhgroups": [ 00:06:18.530 "null", 00:06:18.530 "ffdhe2048", 00:06:18.530 "ffdhe3072", 00:06:18.530 "ffdhe4096", 00:06:18.530 "ffdhe6144", 00:06:18.530 "ffdhe8192" 00:06:18.530 ] 00:06:18.530 } 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "method": "bdev_nvme_set_hotplug", 00:06:18.530 "params": { 00:06:18.530 "period_us": 100000, 00:06:18.530 "enable": false 00:06:18.530 } 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "method": "bdev_iscsi_set_options", 00:06:18.530 "params": { 00:06:18.530 "timeout_sec": 30 00:06:18.530 } 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "method": "bdev_wait_for_examine" 00:06:18.530 } 00:06:18.530 ] 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "subsystem": "nvmf", 00:06:18.530 "config": [ 00:06:18.530 { 00:06:18.530 "method": "nvmf_set_config", 00:06:18.530 "params": { 00:06:18.530 "discovery_filter": "match_any", 00:06:18.530 "admin_cmd_passthru": { 00:06:18.530 "identify_ctrlr": false 00:06:18.530 } 00:06:18.530 } 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "method": "nvmf_set_max_subsystems", 00:06:18.530 "params": { 00:06:18.530 "max_subsystems": 1024 00:06:18.530 } 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "method": "nvmf_set_crdt", 00:06:18.530 "params": { 00:06:18.530 "crdt1": 0, 00:06:18.530 "crdt2": 0, 00:06:18.530 "crdt3": 0 00:06:18.530 } 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "method": "nvmf_create_transport", 00:06:18.530 "params": { 00:06:18.530 "trtype": "TCP", 00:06:18.530 "max_queue_depth": 128, 00:06:18.530 "max_io_qpairs_per_ctrlr": 127, 00:06:18.530 "in_capsule_data_size": 4096, 00:06:18.530 "max_io_size": 131072, 00:06:18.530 "io_unit_size": 131072, 00:06:18.530 "max_aq_depth": 128, 00:06:18.530 "num_shared_buffers": 511, 00:06:18.530 "buf_cache_size": 4294967295, 00:06:18.530 "dif_insert_or_strip": false, 00:06:18.530 "zcopy": false, 00:06:18.530 "c2h_success": true, 00:06:18.530 "sock_priority": 0, 00:06:18.530 "abort_timeout_sec": 1, 00:06:18.530 "ack_timeout": 0, 00:06:18.530 "data_wr_pool_size": 0 00:06:18.530 } 00:06:18.530 } 00:06:18.530 ] 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "subsystem": "nbd", 00:06:18.530 "config": [] 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "subsystem": "ublk", 00:06:18.530 "config": [] 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "subsystem": "vhost_blk", 00:06:18.530 "config": [] 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "subsystem": "scsi", 00:06:18.530 "config": null 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "subsystem": "iscsi", 00:06:18.530 "config": [ 00:06:18.530 { 00:06:18.530 "method": "iscsi_set_options", 00:06:18.530 "params": { 00:06:18.530 "node_base": "iqn.2016-06.io.spdk", 00:06:18.530 "max_sessions": 128, 00:06:18.530 "max_connections_per_session": 2, 00:06:18.530 "max_queue_depth": 64, 00:06:18.530 "default_time2wait": 2, 00:06:18.530 "default_time2retain": 20, 00:06:18.530 "first_burst_length": 8192, 00:06:18.530 "immediate_data": true, 00:06:18.530 "allow_duplicated_isid": false, 00:06:18.530 "error_recovery_level": 0, 00:06:18.530 "nop_timeout": 60, 
00:06:18.530 "nop_in_interval": 30, 00:06:18.530 "disable_chap": false, 00:06:18.530 "require_chap": false, 00:06:18.530 "mutual_chap": false, 00:06:18.530 "chap_group": 0, 00:06:18.530 "max_large_datain_per_connection": 64, 00:06:18.530 "max_r2t_per_connection": 4, 00:06:18.530 "pdu_pool_size": 36864, 00:06:18.530 "immediate_data_pool_size": 16384, 00:06:18.530 "data_out_pool_size": 2048 00:06:18.530 } 00:06:18.530 } 00:06:18.530 ] 00:06:18.530 }, 00:06:18.530 { 00:06:18.530 "subsystem": "vhost_scsi", 00:06:18.530 "config": [] 00:06:18.530 } 00:06:18.530 ] 00:06:18.530 } 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 404660 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 404660 ']' 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 404660 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 404660 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 404660' 00:06:18.530 killing process with pid 404660 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 404660 00:06:18.530 09:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 404660 00:06:18.789 09:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=404877 00:06:18.789 09:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:18.789 09:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 404877 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 404877 ']' 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 404877 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 404877 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 404877' 00:06:24.075 killing process with pid 404877 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 404877 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- 
# wait 404877 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:24.075 00:06:24.075 real 0m6.209s 00:06:24.075 user 0m5.903s 00:06:24.075 sys 0m0.542s 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.075 ************************************ 00:06:24.075 END TEST skip_rpc_with_json 00:06:24.075 ************************************ 00:06:24.075 09:21:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:24.075 09:21:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.075 09:21:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.075 09:21:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.075 ************************************ 00:06:24.075 START TEST skip_rpc_with_delay 00:06:24.075 ************************************ 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.075 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.076 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:24.076 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.076 [2024-07-25 09:21:36.875209] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
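The spdk_app_start error above is the expected outcome: skip_rpc_with_delay wraps spdk_tgt in the NOT helper, so the test passes only if the flag combination is rejected. A minimal sketch of that inversion pattern, with an illustrative body (the real helper in autotest_common.sh also records the exit code in es, as the trace below shows):

    NOT() {
        # succeed only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # passes, because spdk_tgt refuses --wait-for-rpc without an RPC server
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc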
00:06:24.076 [2024-07-25 09:21:36.875313] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:24.335 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:24.335 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.335 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.335 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.335 00:06:24.335 real 0m0.038s 00:06:24.335 user 0m0.023s 00:06:24.335 sys 0m0.015s 00:06:24.335 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.335 09:21:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:24.335 ************************************ 00:06:24.335 END TEST skip_rpc_with_delay 00:06:24.335 ************************************ 00:06:24.335 09:21:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:24.335 09:21:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:24.335 09:21:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:24.335 09:21:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.335 09:21:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.335 09:21:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.335 ************************************ 00:06:24.335 START TEST exit_on_failed_rpc_init 00:06:24.335 ************************************ 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=405790 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 405790 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 405790 ']' 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.335 09:21:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:24.335 [2024-07-25 09:21:36.960852] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
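Startup in exit_on_failed_rpc_init goes through waitforlisten: poll until the new target's RPC socket answers, bounded by max_retries=100 as traced above. A minimal sketch, assuming rpc.py's spdk_get_version as the readiness probe (the actual autotest_common.sh helper is more thorough):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1           # target died early
            ./scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version \
                >/dev/null 2>&1 && return 0                  # socket is serving
            sleep 0.5
        done
        return 1                                             # retries exhausted
    }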
00:06:24.335 [2024-07-25 09:21:36.960888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405790 ] 00:06:24.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.335 [2024-07-25 09:21:37.014637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.335 [2024-07-25 09:21:37.094343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:24.595 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:24.595 [2024-07-25 09:21:37.297234] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:24.595 [2024-07-25 09:21:37.297279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405797 ] 00:06:24.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.595 [2024-07-25 09:21:37.350875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.854 [2024-07-25 09:21:37.425229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.854 [2024-07-25 09:21:37.425293] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:24.854 [2024-07-25 09:21:37.425302] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:24.854 [2024-07-25 09:21:37.425308] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 405790 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 405790 ']' 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 405790 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 405790 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 405790' 00:06:24.854 killing process with pid 405790 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 405790 00:06:24.854 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 405790 00:06:25.114 00:06:25.114 real 0m0.865s 00:06:25.114 user 0m0.937s 00:06:25.114 sys 0m0.320s 00:06:25.114 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.114 09:21:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.114 ************************************ 00:06:25.114 END TEST exit_on_failed_rpc_init 00:06:25.114 ************************************ 00:06:25.114 09:21:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 
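Teardown in these tests runs the killprocess sequence traced above: a kill -0 liveness check, a comm-name lookup to confirm the pid is an SPDK reactor rather than the sudo wrapper, then kill and wait. A minimal sketch, assuming this simplified shape of the helper:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1         # pid must still be alive
        local name
        name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_0
        [ "$name" = sudo ] && return 1                 # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap; pid is our child here
    }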
00:06:25.114 00:06:25.114 real 0m12.801s 00:06:25.114 user 0m12.141s 00:06:25.114 sys 0m1.335s 00:06:25.114 09:21:37 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.114 09:21:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.114 ************************************ 00:06:25.114 END TEST skip_rpc 00:06:25.114 ************************************ 00:06:25.114 09:21:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:25.114 09:21:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.114 09:21:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.114 09:21:37 -- common/autotest_common.sh@10 -- # set +x 00:06:25.114 ************************************ 00:06:25.114 START TEST rpc_client 00:06:25.114 ************************************ 00:06:25.114 09:21:37 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:25.374 * Looking for test storage... 00:06:25.374 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:25.374 09:21:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:25.374 OK 00:06:25.374 09:21:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:25.374 00:06:25.374 real 0m0.102s 00:06:25.374 user 0m0.046s 00:06:25.374 sys 0m0.064s 00:06:25.374 09:21:38 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.374 09:21:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:25.374 ************************************ 00:06:25.374 END TEST rpc_client 00:06:25.374 ************************************ 00:06:25.374 09:21:38 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:25.374 09:21:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.374 09:21:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.374 09:21:38 -- common/autotest_common.sh@10 -- # set +x 00:06:25.374 ************************************ 00:06:25.374 START TEST json_config 00:06:25.374 ************************************ 00:06:25.374 09:21:38 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:25.374 09:21:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:25.374 09:21:38 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.374 09:21:38 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.374 09:21:38 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.374 09:21:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.374 09:21:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.374 09:21:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.374 09:21:38 json_config -- paths/export.sh@5 -- # export PATH 00:06:25.374 09:21:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@47 -- # : 0 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.374 09:21:38 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.374 09:21:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:25.374 09:21:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:25.375 09:21:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:25.375 09:21:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:25.375 09:21:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:25.375 09:21:38 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:25.375 WARNING: No tests are enabled so not running JSON configuration tests 00:06:25.375 09:21:38 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:25.375 00:06:25.375 real 0m0.091s 00:06:25.375 user 0m0.043s 00:06:25.375 sys 0m0.049s 00:06:25.375 09:21:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.375 09:21:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.375 ************************************ 00:06:25.375 END TEST json_config 00:06:25.375 ************************************ 00:06:25.634 09:21:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.634 09:21:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.634 09:21:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.634 09:21:38 -- common/autotest_common.sh@10 -- # set +x 00:06:25.634 ************************************ 00:06:25.634 START TEST json_config_extra_key 00:06:25.634 ************************************ 00:06:25.634 09:21:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.634 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.634 09:21:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:06:25.635 09:21:38 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:25.635 09:21:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.635 09:21:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.635 09:21:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.635 09:21:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.635 09:21:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.635 09:21:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.635 09:21:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.635 09:21:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.635 09:21:38 
json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.635 09:21:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.635 INFO: launching applications... 00:06:25.635 09:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=406151 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.635 Waiting for target to run... 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 406151 /var/tmp/spdk_tgt.sock 00:06:25.635 09:21:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 406151 ']' 00:06:25.635 09:21:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.635 09:21:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.635 09:21:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
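The extra_key variant boots its target on a private RPC socket so it cannot collide with a default /var/tmp/spdk.sock instance. A minimal sketch of the launch recorded just below, assuming paths relative to the SPDK checkout:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    app_pid=$!    # the harness stores this in its app_pid[target] map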
00:06:25.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.635 09:21:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.635 09:21:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.635 09:21:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.635 [2024-07-25 09:21:38.322994] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:25.635 [2024-07-25 09:21:38.323063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406151 ] 00:06:25.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.894 [2024-07-25 09:21:38.587798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.894 [2024-07-25 09:21:38.651643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.462 09:21:39 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.462 09:21:39 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:26.462 09:21:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:26.462 00:06:26.462 09:21:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:26.462 INFO: shutting down applications... 00:06:26.462 09:21:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:26.462 09:21:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:26.462 09:21:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.462 09:21:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 406151 ]] 00:06:26.462 09:21:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 406151 00:06:26.463 09:21:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.463 09:21:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.463 09:21:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 406151 00:06:26.463 09:21:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.033 09:21:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.033 09:21:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.033 09:21:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 406151 00:06:27.033 09:21:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.033 09:21:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:27.033 09:21:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.033 09:21:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.033 SPDK target shutdown done 00:06:27.033 09:21:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:27.033 Success 00:06:27.033 00:06:27.033 real 0m1.404s 00:06:27.033 user 0m1.193s 00:06:27.033 sys 0m0.340s 00:06:27.033 09:21:39 json_config_extra_key -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.033 09:21:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.033 ************************************ 00:06:27.033 END TEST json_config_extra_key 00:06:27.033 ************************************ 00:06:27.033 09:21:39 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.033 09:21:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.033 09:21:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.033 09:21:39 -- common/autotest_common.sh@10 -- # set +x 00:06:27.033 ************************************ 00:06:27.033 START TEST alias_rpc 00:06:27.033 ************************************ 00:06:27.033 09:21:39 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.033 * Looking for test storage... 00:06:27.033 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:27.033 09:21:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.033 09:21:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=406412 00:06:27.033 09:21:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 406412 00:06:27.033 09:21:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.033 09:21:39 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 406412 ']' 00:06:27.033 09:21:39 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.033 09:21:39 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.033 09:21:39 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.033 09:21:39 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.033 09:21:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.033 [2024-07-25 09:21:39.825338] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:27.033 [2024-07-25 09:21:39.825412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406412 ] 00:06:27.292 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.292 [2024-07-25 09:21:39.885519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.292 [2024-07-25 09:21:39.966571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.860 09:21:40 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.860 09:21:40 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.860 09:21:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:28.119 09:21:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 406412 00:06:28.119 09:21:40 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 406412 ']' 00:06:28.119 09:21:40 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 406412 00:06:28.119 09:21:40 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:28.119 09:21:40 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.119 09:21:40 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406412 00:06:28.119 09:21:40 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.120 09:21:40 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.120 09:21:40 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406412' 00:06:28.120 killing process with pid 406412 00:06:28.120 09:21:40 alias_rpc -- common/autotest_common.sh@969 -- # kill 406412 00:06:28.120 09:21:40 alias_rpc -- common/autotest_common.sh@974 -- # wait 406412 00:06:28.379 00:06:28.379 real 0m1.461s 00:06:28.379 user 0m1.610s 00:06:28.379 sys 0m0.383s 00:06:28.379 09:21:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.379 09:21:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.379 ************************************ 00:06:28.379 END TEST alias_rpc 00:06:28.379 ************************************ 00:06:28.639 09:21:41 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:28.639 09:21:41 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.639 09:21:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.639 09:21:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.639 09:21:41 -- common/autotest_common.sh@10 -- # set +x 00:06:28.639 ************************************ 00:06:28.639 START TEST spdkcli_tcp 00:06:28.639 ************************************ 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.639 * Looking for test storage... 
00:06:28.639 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=406686 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 406686 00:06:28.639 09:21:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 406686 ']' 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.639 09:21:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.639 [2024-07-25 09:21:41.336273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
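spdkcli_tcp needs the RPC service reachable over TCP, so once this target is up it bridges 127.0.0.1:9998 to the UNIX-domain socket with socat and issues RPCs through the bridge, as the next lines show. A minimal sketch of that bridge, using the same values and rpc.py flags visible in the trace:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r retries, -t timeout, -s address, -p port, matching the traced call
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods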
00:06:28.639 [2024-07-25 09:21:41.336341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406686 ] 00:06:28.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.639 [2024-07-25 09:21:41.394514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.898 [2024-07-25 09:21:41.471454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.898 [2024-07-25 09:21:41.471456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.481 09:21:42 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.481 09:21:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:29.481 09:21:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:29.481 09:21:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=406900 00:06:29.481 09:21:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:29.741 [ 00:06:29.741 "spdk_get_version", 00:06:29.741 "rpc_get_methods", 00:06:29.741 "trace_get_info", 00:06:29.741 "trace_get_tpoint_group_mask", 00:06:29.741 "trace_disable_tpoint_group", 00:06:29.741 "trace_enable_tpoint_group", 00:06:29.741 "trace_clear_tpoint_mask", 00:06:29.741 "trace_set_tpoint_mask", 00:06:29.741 "vfu_tgt_set_base_path", 00:06:29.741 "framework_get_pci_devices", 00:06:29.741 "framework_get_config", 00:06:29.741 "framework_get_subsystems", 00:06:29.741 "keyring_get_keys", 00:06:29.741 "iobuf_get_stats", 00:06:29.741 "iobuf_set_options", 00:06:29.741 "sock_get_default_impl", 00:06:29.741 "sock_set_default_impl", 00:06:29.741 "sock_impl_set_options", 00:06:29.741 "sock_impl_get_options", 00:06:29.741 "vmd_rescan", 00:06:29.741 "vmd_remove_device", 00:06:29.741 "vmd_enable", 00:06:29.741 "accel_get_stats", 00:06:29.741 "accel_set_options", 00:06:29.741 "accel_set_driver", 00:06:29.741 "accel_crypto_key_destroy", 00:06:29.741 "accel_crypto_keys_get", 00:06:29.741 "accel_crypto_key_create", 00:06:29.741 "accel_assign_opc", 00:06:29.741 "accel_get_module_info", 00:06:29.741 "accel_get_opc_assignments", 00:06:29.741 "notify_get_notifications", 00:06:29.741 "notify_get_types", 00:06:29.741 "bdev_get_histogram", 00:06:29.741 "bdev_enable_histogram", 00:06:29.741 "bdev_set_qos_limit", 00:06:29.741 "bdev_set_qd_sampling_period", 00:06:29.741 "bdev_get_bdevs", 00:06:29.741 "bdev_reset_iostat", 00:06:29.741 "bdev_get_iostat", 00:06:29.741 "bdev_examine", 00:06:29.741 "bdev_wait_for_examine", 00:06:29.741 "bdev_set_options", 00:06:29.741 "scsi_get_devices", 00:06:29.741 "thread_set_cpumask", 00:06:29.741 "framework_get_governor", 00:06:29.741 "framework_get_scheduler", 00:06:29.741 "framework_set_scheduler", 00:06:29.741 "framework_get_reactors", 00:06:29.741 "thread_get_io_channels", 00:06:29.741 "thread_get_pollers", 00:06:29.741 "thread_get_stats", 00:06:29.741 "framework_monitor_context_switch", 00:06:29.741 "spdk_kill_instance", 00:06:29.741 "log_enable_timestamps", 00:06:29.741 "log_get_flags", 00:06:29.741 "log_clear_flag", 00:06:29.741 "log_set_flag", 00:06:29.741 "log_get_level", 00:06:29.741 "log_set_level", 00:06:29.741 "log_get_print_level", 00:06:29.741 "log_set_print_level", 00:06:29.741 "framework_enable_cpumask_locks", 00:06:29.741 "framework_disable_cpumask_locks", 
00:06:29.741 "framework_wait_init", 00:06:29.741 "framework_start_init", 00:06:29.741 "virtio_blk_create_transport", 00:06:29.741 "virtio_blk_get_transports", 00:06:29.741 "vhost_controller_set_coalescing", 00:06:29.741 "vhost_get_controllers", 00:06:29.741 "vhost_delete_controller", 00:06:29.741 "vhost_create_blk_controller", 00:06:29.741 "vhost_scsi_controller_remove_target", 00:06:29.741 "vhost_scsi_controller_add_target", 00:06:29.741 "vhost_start_scsi_controller", 00:06:29.741 "vhost_create_scsi_controller", 00:06:29.741 "ublk_recover_disk", 00:06:29.741 "ublk_get_disks", 00:06:29.741 "ublk_stop_disk", 00:06:29.741 "ublk_start_disk", 00:06:29.741 "ublk_destroy_target", 00:06:29.741 "ublk_create_target", 00:06:29.741 "nbd_get_disks", 00:06:29.741 "nbd_stop_disk", 00:06:29.741 "nbd_start_disk", 00:06:29.741 "env_dpdk_get_mem_stats", 00:06:29.741 "nvmf_stop_mdns_prr", 00:06:29.741 "nvmf_publish_mdns_prr", 00:06:29.741 "nvmf_subsystem_get_listeners", 00:06:29.741 "nvmf_subsystem_get_qpairs", 00:06:29.741 "nvmf_subsystem_get_controllers", 00:06:29.741 "nvmf_get_stats", 00:06:29.741 "nvmf_get_transports", 00:06:29.741 "nvmf_create_transport", 00:06:29.741 "nvmf_get_targets", 00:06:29.741 "nvmf_delete_target", 00:06:29.741 "nvmf_create_target", 00:06:29.741 "nvmf_subsystem_allow_any_host", 00:06:29.741 "nvmf_subsystem_remove_host", 00:06:29.741 "nvmf_subsystem_add_host", 00:06:29.741 "nvmf_ns_remove_host", 00:06:29.741 "nvmf_ns_add_host", 00:06:29.741 "nvmf_subsystem_remove_ns", 00:06:29.741 "nvmf_subsystem_add_ns", 00:06:29.741 "nvmf_subsystem_listener_set_ana_state", 00:06:29.741 "nvmf_discovery_get_referrals", 00:06:29.741 "nvmf_discovery_remove_referral", 00:06:29.741 "nvmf_discovery_add_referral", 00:06:29.741 "nvmf_subsystem_remove_listener", 00:06:29.741 "nvmf_subsystem_add_listener", 00:06:29.741 "nvmf_delete_subsystem", 00:06:29.741 "nvmf_create_subsystem", 00:06:29.741 "nvmf_get_subsystems", 00:06:29.741 "nvmf_set_crdt", 00:06:29.741 "nvmf_set_config", 00:06:29.741 "nvmf_set_max_subsystems", 00:06:29.741 "iscsi_get_histogram", 00:06:29.741 "iscsi_enable_histogram", 00:06:29.741 "iscsi_set_options", 00:06:29.741 "iscsi_get_auth_groups", 00:06:29.741 "iscsi_auth_group_remove_secret", 00:06:29.741 "iscsi_auth_group_add_secret", 00:06:29.741 "iscsi_delete_auth_group", 00:06:29.741 "iscsi_create_auth_group", 00:06:29.741 "iscsi_set_discovery_auth", 00:06:29.741 "iscsi_get_options", 00:06:29.741 "iscsi_target_node_request_logout", 00:06:29.741 "iscsi_target_node_set_redirect", 00:06:29.741 "iscsi_target_node_set_auth", 00:06:29.741 "iscsi_target_node_add_lun", 00:06:29.741 "iscsi_get_stats", 00:06:29.741 "iscsi_get_connections", 00:06:29.741 "iscsi_portal_group_set_auth", 00:06:29.741 "iscsi_start_portal_group", 00:06:29.741 "iscsi_delete_portal_group", 00:06:29.741 "iscsi_create_portal_group", 00:06:29.741 "iscsi_get_portal_groups", 00:06:29.741 "iscsi_delete_target_node", 00:06:29.741 "iscsi_target_node_remove_pg_ig_maps", 00:06:29.741 "iscsi_target_node_add_pg_ig_maps", 00:06:29.741 "iscsi_create_target_node", 00:06:29.741 "iscsi_get_target_nodes", 00:06:29.741 "iscsi_delete_initiator_group", 00:06:29.741 "iscsi_initiator_group_remove_initiators", 00:06:29.741 "iscsi_initiator_group_add_initiators", 00:06:29.741 "iscsi_create_initiator_group", 00:06:29.741 "iscsi_get_initiator_groups", 00:06:29.741 "keyring_linux_set_options", 00:06:29.741 "keyring_file_remove_key", 00:06:29.741 "keyring_file_add_key", 00:06:29.741 "vfu_virtio_create_scsi_endpoint", 00:06:29.741 
"vfu_virtio_scsi_remove_target", 00:06:29.741 "vfu_virtio_scsi_add_target", 00:06:29.741 "vfu_virtio_create_blk_endpoint", 00:06:29.741 "vfu_virtio_delete_endpoint", 00:06:29.741 "iaa_scan_accel_module", 00:06:29.741 "dsa_scan_accel_module", 00:06:29.741 "ioat_scan_accel_module", 00:06:29.741 "accel_error_inject_error", 00:06:29.741 "bdev_iscsi_delete", 00:06:29.741 "bdev_iscsi_create", 00:06:29.741 "bdev_iscsi_set_options", 00:06:29.742 "bdev_virtio_attach_controller", 00:06:29.742 "bdev_virtio_scsi_get_devices", 00:06:29.742 "bdev_virtio_detach_controller", 00:06:29.742 "bdev_virtio_blk_set_hotplug", 00:06:29.742 "bdev_ftl_set_property", 00:06:29.742 "bdev_ftl_get_properties", 00:06:29.742 "bdev_ftl_get_stats", 00:06:29.742 "bdev_ftl_unmap", 00:06:29.742 "bdev_ftl_unload", 00:06:29.742 "bdev_ftl_delete", 00:06:29.742 "bdev_ftl_load", 00:06:29.742 "bdev_ftl_create", 00:06:29.742 "bdev_aio_delete", 00:06:29.742 "bdev_aio_rescan", 00:06:29.742 "bdev_aio_create", 00:06:29.742 "blobfs_create", 00:06:29.742 "blobfs_detect", 00:06:29.742 "blobfs_set_cache_size", 00:06:29.742 "bdev_zone_block_delete", 00:06:29.742 "bdev_zone_block_create", 00:06:29.742 "bdev_delay_delete", 00:06:29.742 "bdev_delay_create", 00:06:29.742 "bdev_delay_update_latency", 00:06:29.742 "bdev_split_delete", 00:06:29.742 "bdev_split_create", 00:06:29.742 "bdev_error_inject_error", 00:06:29.742 "bdev_error_delete", 00:06:29.742 "bdev_error_create", 00:06:29.742 "bdev_raid_set_options", 00:06:29.742 "bdev_raid_remove_base_bdev", 00:06:29.742 "bdev_raid_add_base_bdev", 00:06:29.742 "bdev_raid_delete", 00:06:29.742 "bdev_raid_create", 00:06:29.742 "bdev_raid_get_bdevs", 00:06:29.742 "bdev_lvol_set_parent_bdev", 00:06:29.742 "bdev_lvol_set_parent", 00:06:29.742 "bdev_lvol_check_shallow_copy", 00:06:29.742 "bdev_lvol_start_shallow_copy", 00:06:29.742 "bdev_lvol_grow_lvstore", 00:06:29.742 "bdev_lvol_get_lvols", 00:06:29.742 "bdev_lvol_get_lvstores", 00:06:29.742 "bdev_lvol_delete", 00:06:29.742 "bdev_lvol_set_read_only", 00:06:29.742 "bdev_lvol_resize", 00:06:29.742 "bdev_lvol_decouple_parent", 00:06:29.742 "bdev_lvol_inflate", 00:06:29.742 "bdev_lvol_rename", 00:06:29.742 "bdev_lvol_clone_bdev", 00:06:29.742 "bdev_lvol_clone", 00:06:29.742 "bdev_lvol_snapshot", 00:06:29.742 "bdev_lvol_create", 00:06:29.742 "bdev_lvol_delete_lvstore", 00:06:29.742 "bdev_lvol_rename_lvstore", 00:06:29.742 "bdev_lvol_create_lvstore", 00:06:29.742 "bdev_passthru_delete", 00:06:29.742 "bdev_passthru_create", 00:06:29.742 "bdev_nvme_cuse_unregister", 00:06:29.742 "bdev_nvme_cuse_register", 00:06:29.742 "bdev_opal_new_user", 00:06:29.742 "bdev_opal_set_lock_state", 00:06:29.742 "bdev_opal_delete", 00:06:29.742 "bdev_opal_get_info", 00:06:29.742 "bdev_opal_create", 00:06:29.742 "bdev_nvme_opal_revert", 00:06:29.742 "bdev_nvme_opal_init", 00:06:29.742 "bdev_nvme_send_cmd", 00:06:29.742 "bdev_nvme_get_path_iostat", 00:06:29.742 "bdev_nvme_get_mdns_discovery_info", 00:06:29.742 "bdev_nvme_stop_mdns_discovery", 00:06:29.742 "bdev_nvme_start_mdns_discovery", 00:06:29.742 "bdev_nvme_set_multipath_policy", 00:06:29.742 "bdev_nvme_set_preferred_path", 00:06:29.742 "bdev_nvme_get_io_paths", 00:06:29.742 "bdev_nvme_remove_error_injection", 00:06:29.742 "bdev_nvme_add_error_injection", 00:06:29.742 "bdev_nvme_get_discovery_info", 00:06:29.742 "bdev_nvme_stop_discovery", 00:06:29.742 "bdev_nvme_start_discovery", 00:06:29.742 "bdev_nvme_get_controller_health_info", 00:06:29.742 "bdev_nvme_disable_controller", 00:06:29.742 "bdev_nvme_enable_controller", 00:06:29.742 
"bdev_nvme_reset_controller", 00:06:29.742 "bdev_nvme_get_transport_statistics", 00:06:29.742 "bdev_nvme_apply_firmware", 00:06:29.742 "bdev_nvme_detach_controller", 00:06:29.742 "bdev_nvme_get_controllers", 00:06:29.742 "bdev_nvme_attach_controller", 00:06:29.742 "bdev_nvme_set_hotplug", 00:06:29.742 "bdev_nvme_set_options", 00:06:29.742 "bdev_null_resize", 00:06:29.742 "bdev_null_delete", 00:06:29.742 "bdev_null_create", 00:06:29.742 "bdev_malloc_delete", 00:06:29.742 "bdev_malloc_create" 00:06:29.742 ] 00:06:29.742 09:21:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.742 09:21:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:29.742 09:21:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 406686 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 406686 ']' 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 406686 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406686 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406686' 00:06:29.742 killing process with pid 406686 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 406686 00:06:29.742 09:21:42 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 406686 00:06:30.002 00:06:30.002 real 0m1.446s 00:06:30.002 user 0m2.735s 00:06:30.002 sys 0m0.396s 00:06:30.002 09:21:42 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.002 09:21:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.002 ************************************ 00:06:30.002 END TEST spdkcli_tcp 00:06:30.002 ************************************ 00:06:30.002 09:21:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:30.002 09:21:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.002 09:21:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.002 09:21:42 -- common/autotest_common.sh@10 -- # set +x 00:06:30.002 ************************************ 00:06:30.002 START TEST dpdk_mem_utility 00:06:30.002 ************************************ 00:06:30.002 09:21:42 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:30.262 * Looking for test storage... 
00:06:30.262 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:30.262 09:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:30.262 09:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=406973 00:06:30.262 09:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 406973 00:06:30.262 09:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:30.262 09:21:42 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 406973 ']' 00:06:30.262 09:21:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.262 09:21:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.262 09:21:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.262 09:21:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.262 09:21:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.262 [2024-07-25 09:21:42.852281] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:30.262 [2024-07-25 09:21:42.852350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406973 ] 00:06:30.262 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.262 [2024-07-25 09:21:42.911658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.262 [2024-07-25 09:21:42.988330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.201 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.201 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:31.201 09:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:31.201 09:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:31.201 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.201 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:31.201 { 00:06:31.201 "filename": "/tmp/spdk_mem_dump.txt" 00:06:31.201 } 00:06:31.201 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.201 09:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:31.201 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:31.201 1 heaps totaling size 814.000000 MiB 00:06:31.201 size: 814.000000 MiB heap id: 0 00:06:31.201 end heaps---------- 00:06:31.201 8 mempools totaling size 598.116089 MiB 00:06:31.201 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:31.201 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:31.201 size: 84.521057 MiB name: bdev_io_406973 00:06:31.201 size: 51.011292 MiB name: evtpool_406973 00:06:31.201 
size: 50.003479 MiB name: msgpool_406973 00:06:31.201 size: 21.763794 MiB name: PDU_Pool 00:06:31.201 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:31.201 size: 0.026123 MiB name: Session_Pool 00:06:31.201 end mempools------- 00:06:31.201 6 memzones totaling size 4.142822 MiB 00:06:31.201 size: 1.000366 MiB name: RG_ring_0_406973 00:06:31.201 size: 1.000366 MiB name: RG_ring_1_406973 00:06:31.201 size: 1.000366 MiB name: RG_ring_4_406973 00:06:31.201 size: 1.000366 MiB name: RG_ring_5_406973 00:06:31.201 size: 0.125366 MiB name: RG_ring_2_406973 00:06:31.201 size: 0.015991 MiB name: RG_ring_3_406973 00:06:31.201 end memzones------- 00:06:31.201 09:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:31.201 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:31.201 list of free elements. size: 12.519348 MiB 00:06:31.201 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:31.201 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:31.202 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:31.202 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:31.202 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:31.202 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:31.202 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:31.202 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:31.202 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:31.202 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:31.202 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:31.202 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:31.202 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:31.202 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:31.202 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:31.202 list of standard malloc elements. 
size: 199.218079 MiB 00:06:31.202 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:31.202 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:31.202 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:31.202 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:31.202 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:31.202 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:31.202 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:31.202 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:31.202 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:31.202 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:31.202 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:31.202 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:31.202 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:31.202 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:31.202 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:31.202 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:31.202 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:31.202 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:31.202 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:31.202 list of memzone associated elements. 
size: 602.262573 MiB 00:06:31.202 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:31.202 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:31.202 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:31.202 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:31.202 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:31.202 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_406973_0 00:06:31.202 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:31.202 associated memzone info: size: 48.002930 MiB name: MP_evtpool_406973_0 00:06:31.202 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:31.202 associated memzone info: size: 48.002930 MiB name: MP_msgpool_406973_0 00:06:31.202 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:31.202 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:31.202 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:31.202 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:31.202 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:31.202 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_406973 00:06:31.202 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:31.202 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_406973 00:06:31.202 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:31.202 associated memzone info: size: 1.007996 MiB name: MP_evtpool_406973 00:06:31.202 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:31.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:31.202 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:31.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:31.202 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:31.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:31.202 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:31.202 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:31.202 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:31.202 associated memzone info: size: 1.000366 MiB name: RG_ring_0_406973 00:06:31.202 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:31.202 associated memzone info: size: 1.000366 MiB name: RG_ring_1_406973 00:06:31.202 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:31.202 associated memzone info: size: 1.000366 MiB name: RG_ring_4_406973 00:06:31.202 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:31.202 associated memzone info: size: 1.000366 MiB name: RG_ring_5_406973 00:06:31.202 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:31.202 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_406973 00:06:31.202 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:31.202 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:31.202 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:31.202 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:31.202 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:31.202 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:31.202 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:31.202 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_406973 00:06:31.202 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:31.202 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:31.202 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:31.202 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:31.202 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:31.202 associated memzone info: size: 0.015991 MiB name: RG_ring_3_406973 00:06:31.202 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:31.202 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:31.202 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:31.202 associated memzone info: size: 0.000183 MiB name: MP_msgpool_406973 00:06:31.202 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:31.202 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_406973 00:06:31.202 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:31.202 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:31.202 09:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:31.202 09:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 406973 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 406973 ']' 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 406973 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406973 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406973' 00:06:31.202 killing process with pid 406973 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 406973 00:06:31.202 09:21:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 406973 00:06:31.462 00:06:31.462 real 0m1.354s 00:06:31.462 user 0m1.437s 00:06:31.462 sys 0m0.372s 00:06:31.462 09:21:44 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.462 09:21:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:31.462 ************************************ 00:06:31.462 END TEST dpdk_mem_utility 00:06:31.462 ************************************ 00:06:31.462 09:21:44 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:31.462 09:21:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.462 09:21:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.462 09:21:44 -- common/autotest_common.sh@10 -- # set +x 00:06:31.462 ************************************ 00:06:31.462 START TEST event 00:06:31.462 ************************************ 00:06:31.462 09:21:44 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:31.462 * Looking for test storage... 
00:06:31.462 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:31.462 09:21:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:31.462 09:21:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.462 09:21:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:31.462 09:21:44 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:31.462 09:21:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.462 09:21:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.722 ************************************ 00:06:31.722 START TEST event_perf 00:06:31.722 ************************************ 00:06:31.722 09:21:44 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:31.722 Running I/O for 1 seconds...[2024-07-25 09:21:44.308062] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:31.722 [2024-07-25 09:21:44.308138] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407275 ] 00:06:31.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.722 [2024-07-25 09:21:44.369803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.722 [2024-07-25 09:21:44.446485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.722 [2024-07-25 09:21:44.446578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.722 [2024-07-25 09:21:44.446643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.722 [2024-07-25 09:21:44.446644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.100 Running I/O for 1 seconds... 00:06:33.100 lcore 0: 187872 00:06:33.100 lcore 1: 187870 00:06:33.100 lcore 2: 187870 00:06:33.100 lcore 3: 187870 00:06:33.100 done. 00:06:33.100 00:06:33.100 real 0m1.221s 00:06:33.100 user 0m4.133s 00:06:33.100 sys 0m0.085s 00:06:33.100 09:21:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.100 09:21:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.100 ************************************ 00:06:33.100 END TEST event_perf 00:06:33.100 ************************************ 00:06:33.100 09:21:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:33.100 09:21:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:33.100 09:21:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.100 09:21:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.100 ************************************ 00:06:33.100 START TEST event_reactor 00:06:33.100 ************************************ 00:06:33.100 09:21:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:33.100 [2024-07-25 09:21:45.592726] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
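A quick aside on the -m 0xF mask used below: bits 0-3 select four lcores, which is why event_perf starts four reactors and reports one event counter per lcore 0..3. The arithmetic, checked against the counters printed a few lines further down:

    mask = 0xF                      # the -m 0xF argument below
    lcores = [b for b in range(64) if mask >> b & 1]
    assert lcores == [0, 1, 2, 3]   # matches the four "Reactor started" lines
    # the per-lcore counters from the 1 s run sum to the total throughput:
    print(sum([187872, 187870, 187870, 187870]))  # 751482 events in one second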
00:06:33.100 [2024-07-25 09:21:45.592795] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407491 ] 00:06:33.100 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.100 [2024-07-25 09:21:45.654196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.100 [2024-07-25 09:21:45.733969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.038 test_start 00:06:34.038 oneshot 00:06:34.038 tick 100 00:06:34.038 tick 100 00:06:34.038 tick 250 00:06:34.038 tick 100 00:06:34.038 tick 100 00:06:34.038 tick 100 00:06:34.038 tick 250 00:06:34.038 tick 500 00:06:34.038 tick 100 00:06:34.038 tick 100 00:06:34.038 tick 250 00:06:34.038 tick 100 00:06:34.038 tick 100 00:06:34.038 test_end 00:06:34.038 00:06:34.038 real 0m1.220s 00:06:34.038 user 0m1.136s 00:06:34.038 sys 0m0.080s 00:06:34.038 09:21:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.038 09:21:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:34.038 ************************************ 00:06:34.038 END TEST event_reactor 00:06:34.038 ************************************ 00:06:34.038 09:21:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.038 09:21:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:34.038 09:21:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.038 09:21:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.297 ************************************ 00:06:34.297 START TEST event_reactor_perf 00:06:34.297 ************************************ 00:06:34.297 09:21:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.297 [2024-07-25 09:21:46.873260] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:34.297 [2024-07-25 09:21:46.873332] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407715 ] 00:06:34.297 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.297 [2024-07-25 09:21:46.931992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.297 [2024-07-25 09:21:47.003048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.677 test_start 00:06:35.677 test_end 00:06:35.677 Performance: 929531 events per second 00:06:35.677 00:06:35.677 real 0m1.205s 00:06:35.677 user 0m1.131s 00:06:35.677 sys 0m0.070s 00:06:35.677 09:21:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.677 09:21:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.677 ************************************ 00:06:35.677 END TEST event_reactor_perf 00:06:35.677 ************************************ 00:06:35.677 09:21:48 event -- event/event.sh@49 -- # uname -s 00:06:35.677 09:21:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:35.677 09:21:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:35.677 09:21:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.677 09:21:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.677 09:21:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.677 ************************************ 00:06:35.677 START TEST event_scheduler 00:06:35.677 ************************************ 00:06:35.677 09:21:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:35.677 * Looking for test storage... 00:06:35.677 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:35.677 09:21:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:35.677 09:21:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=407979 00:06:35.677 09:21:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.677 09:21:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 407979 00:06:35.677 09:21:48 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 407979 ']' 00:06:35.677 09:21:48 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.677 09:21:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:35.677 09:21:48 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.677 09:21:48 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
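The "Performance: 929531 events per second" line above comes from a single reactor (-c 0x1, -t 1), so inverting the rate gives roughly a microsecond of reactor overhead per event; a back-of-envelope check:

    events_per_sec = 929_531     # reported by reactor_perf above
    print(1e9 / events_per_sec)  # ~1076 ns spent per event, scheduling included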
00:06:35.677 09:21:48 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.677 09:21:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.677 [2024-07-25 09:21:48.248864] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:35.677 [2024-07-25 09:21:48.248945] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407979 ] 00:06:35.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.677 [2024-07-25 09:21:48.303360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.677 [2024-07-25 09:21:48.388289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.677 [2024-07-25 09:21:48.388374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.677 [2024-07-25 09:21:48.388457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.677 [2024-07-25 09:21:48.388458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:36.615 09:21:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 [2024-07-25 09:21:49.086823] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:36.615 [2024-07-25 09:21:49.086842] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:36.615 [2024-07-25 09:21:49.086852] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:36.615 [2024-07-25 09:21:49.086858] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:36.615 [2024-07-25 09:21:49.086863] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 [2024-07-25 09:21:49.156873] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
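The scheduler_create_thread subtest below exercises the test-local scheduler_plugin RPCs (scheduler_thread_create, scheduler_thread_set_active, scheduler_thread_delete) against the dynamic scheduler just enabled. A hedged sketch of that flow driven through scripts/rpc.py: the method names, thread names, masks, and active percentages are the ones in the log, but the plugin-loading mechanics (PYTHONPATH pointing at test/event/scheduler so --plugin can import scheduler_plugin) are an assumption:

    import subprocess

    RPC = "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py"

    def rpc_cmd(*args):
        # assumes rpc.py can import the test's scheduler_plugin module
        return subprocess.check_output(
            [RPC, "--plugin", "scheduler_plugin", *args], text=True)

    # one busy thread and one idle thread pinned to each of the four cores
    for core in range(4):
        rpc_cmd("scheduler_thread_create", "-n", "active_pinned",
                "-m", hex(1 << core), "-a", "100")
        rpc_cmd("scheduler_thread_create", "-n", "idle_pinned",
                "-m", hex(1 << core), "-a", "0")
    # unpinned threads: one re-weighted in place, one created and deleted,
    # mirroring the half_active/deleted threads in the log
    half = rpc_cmd("scheduler_thread_create", "-n", "half_active", "-a", "0").strip()
    rpc_cmd("scheduler_thread_set_active", half, "50")
    doomed = rpc_cmd("scheduler_thread_create", "-n", "deleted", "-a", "100").strip()
    rpc_cmd("scheduler_thread_delete", doomed)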
00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 ************************************ 00:06:36.615 START TEST scheduler_create_thread 00:06:36.615 ************************************ 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 2 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 3 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 4 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 5 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 6 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 7 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 8 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 9 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 10 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.615 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.184 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.184 09:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:37.184 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.184 09:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.560 09:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.560 09:21:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:38.560 09:21:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:38.560 09:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.560 09:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.493 09:21:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.493 00:06:39.493 real 0m3.099s 00:06:39.493 user 0m0.024s 00:06:39.493 sys 0m0.005s 00:06:39.493 09:21:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.493 09:21:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.493 ************************************ 00:06:39.493 END TEST scheduler_create_thread 00:06:39.493 ************************************ 00:06:39.751 09:21:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:39.751 09:21:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 407979 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 407979 ']' 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 407979 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 407979 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 407979' 00:06:39.751 killing process with pid 407979 00:06:39.751 09:21:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 407979 00:06:39.752 09:21:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 407979 00:06:40.011 [2024-07-25 09:21:52.667911] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:40.270 00:06:40.270 real 0m4.730s 00:06:40.270 user 0m9.309s 00:06:40.270 sys 0m0.329s 00:06:40.270 09:21:52 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.270 09:21:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 ************************************ 00:06:40.270 END TEST event_scheduler 00:06:40.270 ************************************ 00:06:40.270 09:21:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:40.270 09:21:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:40.270 09:21:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.270 09:21:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.270 09:21:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 ************************************ 00:06:40.270 START TEST app_repeat 00:06:40.270 ************************************ 00:06:40.270 09:21:52 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=408871 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 408871' 00:06:40.270 Process app_repeat pid: 408871 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:40.270 spdk_app_start Round 0 00:06:40.270 09:21:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 408871 /var/tmp/spdk-nbd.sock 00:06:40.270 09:21:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 408871 ']' 00:06:40.270 09:21:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.270 09:21:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.270 09:21:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.270 09:21:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.270 09:21:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 [2024-07-25 09:21:52.952332] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
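The app_repeat test starting above builds the same fixture for every round: two 64 MB malloc bdevs with a 4096-byte block size, exported as /dev/nbd0 and /dev/nbd1 through the /var/tmp/spdk-nbd.sock instance. A sketch under those assumptions (the RPC names and arguments appear verbatim below; the subprocess invocation style is illustrative):

    import subprocess

    RPC = "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/spdk-nbd.sock"

    def rpc(*args):
        return subprocess.check_output([RPC, "-s", SOCK, *args], text=True)

    for nbd in ("/dev/nbd0", "/dev/nbd1"):
        # 64 MB bdev, 4096-byte blocks; the RPC echoes the auto-chosen name
        name = rpc("bdev_malloc_create", "64", "4096").strip()  # MallocN
        rpc("nbd_start_disk", name, nbd)                        # export over NBD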
00:06:40.270 [2024-07-25 09:21:52.952413] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408871 ] 00:06:40.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.270 [2024-07-25 09:21:53.014249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.529 [2024-07-25 09:21:53.087111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.529 [2024-07-25 09:21:53.087116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.094 09:21:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.094 09:21:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:41.094 09:21:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.354 Malloc0 00:06:41.354 09:21:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.354 Malloc1 00:06:41.354 09:21:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.354 09:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.613 /dev/nbd0 00:06:41.613 09:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.613 09:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.613 09:21:54 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.613 1+0 records in 00:06:41.613 1+0 records out 00:06:41.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191443 s, 21.4 MB/s 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.613 09:21:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.613 09:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.613 09:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.613 09:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.871 /dev/nbd1 00:06:41.871 09:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.871 09:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.871 1+0 records in 00:06:41.871 1+0 records out 00:06:41.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213619 s, 19.2 MB/s 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.871 09:21:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.871 09:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.871 
09:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.871 09:21:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.871 09:21:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.871 09:21:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.130 { 00:06:42.130 "nbd_device": "/dev/nbd0", 00:06:42.130 "bdev_name": "Malloc0" 00:06:42.130 }, 00:06:42.130 { 00:06:42.130 "nbd_device": "/dev/nbd1", 00:06:42.130 "bdev_name": "Malloc1" 00:06:42.130 } 00:06:42.130 ]' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.130 { 00:06:42.130 "nbd_device": "/dev/nbd0", 00:06:42.130 "bdev_name": "Malloc0" 00:06:42.130 }, 00:06:42.130 { 00:06:42.130 "nbd_device": "/dev/nbd1", 00:06:42.130 "bdev_name": "Malloc1" 00:06:42.130 } 00:06:42.130 ]' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.130 /dev/nbd1' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.130 /dev/nbd1' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.130 256+0 records in 00:06:42.130 256+0 records out 00:06:42.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105334 s, 99.5 MB/s 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.130 256+0 records in 00:06:42.130 256+0 records out 00:06:42.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143 s, 73.3 MB/s 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.130 256+0 records in 00:06:42.130 256+0 records out 
00:06:42.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152417 s, 68.8 MB/s 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.130 09:21:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.131 09:21:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.389 09:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.648 09:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.648 09:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.648 09:21:55 event.app_repeat -- 
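The dd/cmp sequence above is a plain write-then-verify pass: 1 MiB of /dev/urandom is copied onto each NBD device in 4096-byte blocks and compared back with cmp -b -n 1M. A rough Python equivalent of the same check, minus the O_DIRECT handling that dd's oflag=direct implies:

    import os

    pattern = os.urandom(1024 * 1024)        # the 256 x 4096-byte dd payload
    for dev in ("/dev/nbd0", "/dev/nbd1"):
        with open(dev, "r+b") as f:
            f.write(pattern)
            f.flush()
            os.fsync(f.fileno())             # push writes through to the bdev
            f.seek(0)
            assert f.read(len(pattern)) == pattern, f"mismatch on {dev}"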
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.648 09:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.648 09:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.648 09:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.648 09:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.649 09:21:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.649 09:21:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.908 09:21:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.167 [2024-07-25 09:21:55.797874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.167 [2024-07-25 09:21:55.866750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.167 [2024-07-25 09:21:55.866752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.167 [2024-07-25 09:21:55.905413] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.167 [2024-07-25 09:21:55.905454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.455 09:21:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.455 09:21:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:46.455 spdk_app_start Round 1 00:06:46.455 09:21:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 408871 /var/tmp/spdk-nbd.sock 00:06:46.455 09:21:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 408871 ']' 00:06:46.455 09:21:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.455 09:21:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.455 09:21:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:46.455 09:21:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.455 09:21:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.455 09:21:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.455 09:21:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:46.455 09:21:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.455 Malloc0 00:06:46.455 09:21:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.455 Malloc1 00:06:46.455 09:21:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.455 09:21:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.714 /dev/nbd0 00:06:46.714 09:21:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.714 09:21:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.714 1+0 records in 00:06:46.714 1+0 records out 00:06:46.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225682 s, 18.1 MB/s 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:46.714 09:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.714 09:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.714 09:21:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.714 /dev/nbd1 00:06:46.714 09:21:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.714 09:21:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.714 09:21:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.714 1+0 records in 00:06:46.714 1+0 records out 00:06:46.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153351 s, 26.7 MB/s 00:06:46.973 09:21:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:46.973 09:21:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.973 09:21:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:46.973 09:21:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.973 09:21:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:46.973 09:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.973 09:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.973 09:21:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.973 09:21:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.973 09:21:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:46.973 09:21:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.973 { 00:06:46.973 "nbd_device": "/dev/nbd0", 00:06:46.973 "bdev_name": "Malloc0" 00:06:46.973 }, 00:06:46.973 { 00:06:46.973 "nbd_device": "/dev/nbd1", 00:06:46.973 "bdev_name": "Malloc1" 00:06:46.973 } 00:06:46.973 ]' 00:06:46.973 09:21:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.973 { 00:06:46.973 "nbd_device": "/dev/nbd0", 00:06:46.973 "bdev_name": "Malloc0" 00:06:46.973 }, 00:06:46.973 { 00:06:46.973 "nbd_device": "/dev/nbd1", 00:06:46.973 "bdev_name": "Malloc1" 00:06:46.973 } 00:06:46.973 ]' 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.974 /dev/nbd1' 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.974 /dev/nbd1' 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.974 256+0 records in 00:06:46.974 256+0 records out 00:06:46.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103784 s, 101 MB/s 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.974 09:21:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.233 256+0 records in 00:06:47.233 256+0 records out 00:06:47.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150602 s, 69.6 MB/s 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.233 256+0 records in 00:06:47.233 256+0 records out 00:06:47.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152259 s, 68.9 MB/s 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.233 09:21:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.492 09:22:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.751 09:22:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.751 09:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.751 09:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.751 09:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.751 09:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.751 09:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.752 09:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.752 09:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.752 09:22:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.752 09:22:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.752 09:22:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.752 09:22:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.752 09:22:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.011 09:22:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.011 [2024-07-25 09:22:00.764315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.269 [2024-07-25 09:22:00.834589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.269 [2024-07-25 09:22:00.834590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.269 [2024-07-25 09:22:00.874760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.269 [2024-07-25 09:22:00.874801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.804 09:22:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.804 09:22:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.804 spdk_app_start Round 2 00:06:50.804 09:22:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 408871 /var/tmp/spdk-nbd.sock 00:06:50.804 09:22:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 408871 ']' 00:06:50.804 09:22:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.804 09:22:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.804 09:22:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
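The write/verify cycle above (nbd_common.sh@70-85) seeds 1 MiB of random data, pushes it through both nbd devices with O_DIRECT, then reads it back with cmp. A minimal sketch under the same structure; $testdir stands in for the test/event directory used in the trace:

  function nbd_dd_data_verify() {
      local nbd_list=($1) # e.g. '/dev/nbd0 /dev/nbd1'
      local operation=$2  # 'write' or 'verify'
      local tmp_file=$testdir/nbdrandtest

      if [ "$operation" = "write" ]; then
          # seed 1 MiB of random data, then copy it to every device bypassing the page cache
          dd if=/dev/urandom of=$tmp_file bs=4096 count=256
          for i in "${nbd_list[@]}"; do
              dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
          done
      elif [ "$operation" = "verify" ]; then
          # compare the first 1M of each device byte-for-byte against the seed file
          for i in "${nbd_list[@]}"; do
              cmp -b -n 1M $tmp_file $i
          done
          rm $tmp_file
      fi
  }

Writing with oflag=direct makes sure the data actually crosses the nbd transport into the malloc bdevs rather than sitting in the page cache, which is what the later cmp is meant to exercise.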
00:06:50.805 09:22:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.805 09:22:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.063 09:22:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.064 09:22:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:51.064 09:22:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.323 Malloc0 00:06:51.323 09:22:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.323 Malloc1 00:06:51.323 09:22:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.323 09:22:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.582 /dev/nbd0 00:06:51.582 09:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.582 09:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.582 09:22:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.582 09:22:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.583 1+0 records in 00:06:51.583 1+0 records out 00:06:51.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196518 s, 20.8 MB/s 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.583 09:22:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.583 09:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.583 09:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.583 09:22:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.842 /dev/nbd1 00:06:51.842 09:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.842 09:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.842 1+0 records in 00:06:51.842 1+0 records out 00:06:51.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195227 s, 21.0 MB/s 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.842 09:22:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.842 09:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.842 09:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.842 09:22:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.842 09:22:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.842 09:22:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.102 { 00:06:52.102 "nbd_device": "/dev/nbd0", 00:06:52.102 "bdev_name": "Malloc0" 00:06:52.102 }, 00:06:52.102 { 00:06:52.102 "nbd_device": "/dev/nbd1", 00:06:52.102 "bdev_name": "Malloc1" 00:06:52.102 } 00:06:52.102 ]' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.102 { 00:06:52.102 "nbd_device": "/dev/nbd0", 00:06:52.102 "bdev_name": "Malloc0" 00:06:52.102 }, 00:06:52.102 { 00:06:52.102 "nbd_device": "/dev/nbd1", 00:06:52.102 "bdev_name": "Malloc1" 00:06:52.102 } 00:06:52.102 ]' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.102 /dev/nbd1' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.102 /dev/nbd1' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.102 256+0 records in 00:06:52.102 256+0 records out 00:06:52.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00336259 s, 312 MB/s 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.102 256+0 records in 00:06:52.102 256+0 records out 00:06:52.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142347 s, 73.7 MB/s 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.102 256+0 records in 00:06:52.102 256+0 records out 00:06:52.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149948 s, 69.9 MB/s 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.102 09:22:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.362 09:22:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.362 09:22:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.620 09:22:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.620 09:22:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.879 09:22:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.138 [2024-07-25 09:22:05.766364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.138 [2024-07-25 09:22:05.836581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.138 [2024-07-25 09:22:05.836584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.138 [2024-07-25 09:22:05.876225] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.138 [2024-07-25 09:22:05.876267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.429 09:22:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 408871 /var/tmp/spdk-nbd.sock 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 408871 ']' 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
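nbd_get_count, seen above both with two disks attached and (after nbd_stop_disk) with none, derives the count from the target's own view rather than from /dev. A minimal sketch reconstructed from the trace; $rootdir stands in for the spdk checkout path:

  function nbd_get_count() {
      local rpc_server=$1

      # ask the target which nbd devices it currently exports
      local nbd_disks_json
      nbd_disks_json=$($rootdir/scripts/rpc.py -s "$rpc_server" nbd_get_disks)

      local nbd_disks_name
      nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

      # grep -c exits non-zero on zero matches, hence the bare 'true' in the trace
      local count
      count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
      echo "$count"
  }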
00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:56.429 09:22:08 event.app_repeat -- event/event.sh@39 -- # killprocess 408871 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 408871 ']' 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 408871 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 408871 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 408871' 00:06:56.429 killing process with pid 408871 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@969 -- # kill 408871 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@974 -- # wait 408871 00:06:56.429 spdk_app_start is called in Round 0. 00:06:56.429 Shutdown signal received, stop current app iteration 00:06:56.429 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:56.429 spdk_app_start is called in Round 1. 00:06:56.429 Shutdown signal received, stop current app iteration 00:06:56.429 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:56.429 spdk_app_start is called in Round 2. 00:06:56.429 Shutdown signal received, stop current app iteration 00:06:56.429 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:56.429 spdk_app_start is called in Round 3. 
00:06:56.429 Shutdown signal received, stop current app iteration 00:06:56.429 09:22:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:56.429 09:22:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:56.429 00:06:56.429 real 0m16.043s 00:06:56.429 user 0m34.689s 00:06:56.429 sys 0m2.453s 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.429 09:22:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.429 ************************************ 00:06:56.429 END TEST app_repeat 00:06:56.429 ************************************ 00:06:56.429 09:22:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:56.429 09:22:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:56.429 09:22:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.429 09:22:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.429 09:22:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.429 ************************************ 00:06:56.429 START TEST cpu_locks 00:06:56.429 ************************************ 00:06:56.429 09:22:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:56.429 * Looking for test storage... 00:06:56.429 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:56.429 09:22:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:56.429 09:22:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:56.429 09:22:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:56.429 09:22:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:56.429 09:22:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.429 09:22:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.429 09:22:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.429 ************************************ 00:06:56.429 START TEST default_locks 00:06:56.429 ************************************ 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=411665 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 411665 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 411665 ']' 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
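killprocess, used above to tear down the app_repeat target (pid 408871), checks liveness and the process name before sending anything. A minimal sketch from the visible branches; the sudo guard is an assumption, since that path is never taken in this run:

  function killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1 # refuse to run without a pid

      if kill -0 "$pid"; then # still alive?
          local process_name=
          if [ "$(uname)" = "Linux" ]; then
              # resolve the command name, e.g. reactor_0 for an spdk target
              process_name=$(ps --no-headers -o comm= "$pid")
          fi
          if [ "$process_name" = "sudo" ]; then
              return 1 # assumed guard: never SIGTERM a sudo wrapper directly
          fi
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid"
      fi
  }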
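The START TEST / END TEST banners and the real/user/sys summary that bracket app_repeat and cpu_locks come from the run_test wrapper. A rough sketch of its shape; the real helper also tags every xtrace line with the test name (the event.cpu_locks prefixes above) and suppresses tracing around the banners, which is omitted here:

  function run_test() {
      local test_name=$1
      shift

      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"

      # the timing summary in the log comes from running the test body under 'time'
      time "$@"

      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }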
00:06:56.429 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.429 09:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.429 [2024-07-25 09:22:09.149364] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:56.429 [2024-07-25 09:22:09.149431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411665 ] 00:06:56.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.429 [2024-07-25 09:22:09.205288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.688 [2024-07-25 09:22:09.280175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.256 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.256 09:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:57.256 09:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 411665 00:06:57.256 09:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 411665 00:06:57.256 09:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.514 lslocks: write error 00:06:57.514 09:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 411665 00:06:57.514 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 411665 ']' 00:06:57.514 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 411665 00:06:57.514 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:57.773 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.773 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 411665 00:06:57.773 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.773 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.773 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 411665' 00:06:57.773 killing process with pid 411665 00:06:57.773 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 411665 00:06:57.773 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 411665 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 411665 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 411665 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:58.032 09:22:10 
event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 411665 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 411665 ']' 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.032 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (411665) - No such process 00:06:58.032 ERROR: process (pid: 411665) is no longer running 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:58.032 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.033 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.033 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.033 09:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:58.033 09:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.033 09:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.033 09:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.033 00:06:58.033 real 0m1.537s 00:06:58.033 user 0m1.618s 00:06:58.033 sys 0m0.496s 00:06:58.033 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.033 09:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.033 ************************************ 00:06:58.033 END TEST default_locks 00:06:58.033 ************************************ 00:06:58.033 09:22:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:58.033 09:22:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.033 09:22:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.033 09:22:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.033 ************************************ 00:06:58.033 START TEST default_locks_via_rpc 00:06:58.033 ************************************ 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=411918 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 411918 00:06:58.033 
09:22:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 411918 ']' 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.033 09:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.033 [2024-07-25 09:22:10.749487] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:58.033 [2024-07-25 09:22:10.749555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411918 ] 00:06:58.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.033 [2024-07-25 09:22:10.807018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.292 [2024-07-25 09:22:10.877839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 411918 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 411918 00:06:58.292 09:22:11 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 411918 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 411918 ']' 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 411918 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 411918 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 411918' 00:06:58.858 killing process with pid 411918 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 411918 00:06:58.858 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 411918 00:06:59.213 00:06:59.213 real 0m1.113s 00:06:59.213 user 0m1.061s 00:06:59.213 sys 0m0.496s 00:06:59.213 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.213 09:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.213 ************************************ 00:06:59.213 END TEST default_locks_via_rpc 00:06:59.213 ************************************ 00:06:59.213 09:22:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.213 09:22:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.213 09:22:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.213 09:22:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.213 ************************************ 00:06:59.213 START TEST non_locking_app_on_locked_coremask 00:06:59.213 ************************************ 00:06:59.213 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:59.213 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=412158 00:06:59.213 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 412158 /var/tmp/spdk.sock 00:06:59.213 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 412158 ']' 00:06:59.213 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.213 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.213 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:59.214 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.214 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.214 09:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.214 [2024-07-25 09:22:11.926916] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:59.214 [2024-07-25 09:22:11.926990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412158 ] 00:06:59.214 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.214 [2024-07-25 09:22:12.000486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.514 [2024-07-25 09:22:12.077405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=412345 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 412345 /var/tmp/spdk2.sock 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 412345 ']' 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.125 09:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.125 [2024-07-25 09:22:12.756970] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:00.125 [2024-07-25 09:22:12.757016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412345 ] 00:07:00.125 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.125 [2024-07-25 09:22:12.828515] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
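non_locking_app_on_locked_coremask, starting above, is the two-instance case: the first target claims core 0 and its spdk_cpu_lock file, and a second target on the same mask can only come up because the check is disabled. A minimal sketch of the pattern from the trace; $rootdir stands in for the spdk checkout, and the pids come from $! rather than the literal values in the log:

  # instance 1: plain start on core 0, takes the cpu-core lock file
  $rootdir/build/bin/spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid /var/tmp/spdk.sock

  # instance 2: same core mask, lock check disabled, and its own RPC socket,
  # which is why the log prints "CPU core locks deactivated" for it
  $rootdir/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!
  waitforlisten $spdk_tgt_pid2 /var/tmp/spdk2.sock

The separate -r /var/tmp/spdk2.sock matters as much as the lock flag: without it the second instance would collide with the first target's default RPC socket as well as its core.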
00:07:00.125 [2024-07-25 09:22:12.828542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.397 [2024-07-25 09:22:12.988206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.015 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.015 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:01.015 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 412158 00:07:01.015 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 412158 00:07:01.015 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.281 lslocks: write error 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 412158 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 412158 ']' 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 412158 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 412158 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 412158' 00:07:01.281 killing process with pid 412158 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 412158 00:07:01.281 09:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 412158 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 412345 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 412345 ']' 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 412345 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 412345 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 412345' 00:07:01.865 killing 
process with pid 412345 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 412345 00:07:01.865 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 412345 00:07:02.139 00:07:02.139 real 0m3.009s 00:07:02.139 user 0m3.209s 00:07:02.139 sys 0m0.833s 00:07:02.139 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.139 09:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.139 ************************************ 00:07:02.139 END TEST non_locking_app_on_locked_coremask 00:07:02.139 ************************************ 00:07:02.408 09:22:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:02.408 09:22:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.408 09:22:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.408 09:22:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.408 ************************************ 00:07:02.408 START TEST locking_app_on_unlocked_coremask 00:07:02.408 ************************************ 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=412642 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 412642 /var/tmp/spdk.sock 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 412642 ']' 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.408 09:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.408 [2024-07-25 09:22:15.002468] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:02.408 [2024-07-25 09:22:15.002539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412642 ] 00:07:02.408 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.408 [2024-07-25 09:22:15.062495] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
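The locks_exist/lslocks exchange traced above, and repeated for each case below, is how the suite confirms that the lock-holding target really created a POSIX lock named spdk_cpu_lock_*. The stray "lslocks: write error" is most likely harmless: grep -q exits on its first match, so lslocks ends up writing the rest of its output into a closed pipe. A simplified reconstruction of the check, assuming the helper shape shown in the trace (the real body lives in event/cpu_locks.sh):

    # Simplified sketch of the traced check; succeeds only when the pid
    # holds a CPU core lock.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 412642 && echo 'core lock present'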
00:07:02.408 [2024-07-25 09:22:15.062528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.408 [2024-07-25 09:22:15.132772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=412855 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 412855 /var/tmp/spdk2.sock 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 412855 ']' 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.035 09:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.316 [2024-07-25 09:22:15.834214] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:03.316 [2024-07-25 09:22:15.834287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412855 ] 00:07:03.316 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.316 [2024-07-25 09:22:15.908616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.316 [2024-07-25 09:22:16.057123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.929 09:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.929 09:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:03.929 09:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 412855 00:07:03.929 09:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 412855 00:07:03.929 09:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.543 lslocks: write error 00:07:04.543 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 412642 00:07:04.543 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 412642 ']' 00:07:04.543 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 412642 00:07:04.544 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:04.544 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.544 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 412642 00:07:04.544 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.544 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.544 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 412642' 00:07:04.544 killing process with pid 412642 00:07:04.544 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 412642 00:07:04.544 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 412642 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 412855 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 412855 ']' 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 412855 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 412855 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 412855' 00:07:05.162 killing process with pid 412855 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 412855 00:07:05.162 09:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 412855 00:07:05.454 00:07:05.454 real 0m3.175s 00:07:05.454 user 0m3.375s 00:07:05.454 sys 0m0.920s 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.454 ************************************ 00:07:05.454 END TEST locking_app_on_unlocked_coremask 00:07:05.454 ************************************ 00:07:05.454 09:22:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:05.454 09:22:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.454 09:22:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.454 09:22:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.454 ************************************ 00:07:05.454 START TEST locking_app_on_locked_coremask 00:07:05.454 ************************************ 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=413328 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 413328 /var/tmp/spdk.sock 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 413328 ']' 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.454 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.454 [2024-07-25 09:22:18.239903] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:05.454 [2024-07-25 09:22:18.239972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413328 ] 00:07:05.727 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.727 [2024-07-25 09:22:18.297970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.727 [2024-07-25 09:22:18.377250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=413334 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 413334 /var/tmp/spdk2.sock 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 413334 /var/tmp/spdk2.sock 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 413334 /var/tmp/spdk2.sock 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 413334 ']' 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.985 09:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.985 [2024-07-25 09:22:18.573129] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:05.985 [2024-07-25 09:22:18.573192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413334 ] 00:07:05.985 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.985 [2024-07-25 09:22:18.645809] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 413328 has claimed it. 00:07:05.985 [2024-07-25 09:22:18.645839] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.550 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (413334) - No such process 00:07:06.550 ERROR: process (pid: 413334) is no longer running 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 413328 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 413328 00:07:06.550 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.808 lslocks: write error 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 413328 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 413328 ']' 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 413328 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 413328 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 413328' 00:07:06.808 killing process with pid 413328 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 413328 00:07:06.808 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 413328 00:07:07.067 00:07:07.067 real 0m1.588s 00:07:07.067 user 0m1.666s 00:07:07.067 sys 0m0.495s 00:07:07.067 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.067 09:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.067 ************************************ 00:07:07.067 END TEST locking_app_on_locked_coremask 00:07:07.067 ************************************ 00:07:07.067 09:22:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:07.067 09:22:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.067 09:22:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.067 09:22:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.067 ************************************ 00:07:07.067 START TEST locking_overlapped_coremask 00:07:07.067 ************************************ 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=413580 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 413580 /var/tmp/spdk.sock 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 413580 ']' 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.067 09:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.325 [2024-07-25 09:22:19.890710] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:07.325 [2024-07-25 09:22:19.890778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413580 ] 00:07:07.325 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.326 [2024-07-25 09:22:19.947661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.326 [2024-07-25 09:22:20.021363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.326 [2024-07-25 09:22:20.023556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.326 [2024-07-25 09:22:20.023568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=413710 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 413710 /var/tmp/spdk2.sock 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 413710 /var/tmp/spdk2.sock 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 413710 /var/tmp/spdk2.sock 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 413710 ']' 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.261 09:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.261 [2024-07-25 09:22:20.747912] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:08.261 [2024-07-25 09:22:20.747989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413710 ] 00:07:08.261 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.261 [2024-07-25 09:22:20.831152] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 413580 has claimed it. 00:07:08.261 [2024-07-25 09:22:20.831191] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.828 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (413710) - No such process 00:07:08.828 ERROR: process (pid: 413710) is no longer running 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 413580 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 413580 ']' 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 413580 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 413580 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 413580' 00:07:08.828 killing process with pid 413580 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 
-- # kill 413580 00:07:08.828 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 413580 00:07:09.087 00:07:09.087 real 0m1.894s 00:07:09.087 user 0m5.431s 00:07:09.087 sys 0m0.395s 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.087 ************************************ 00:07:09.087 END TEST locking_overlapped_coremask 00:07:09.087 ************************************ 00:07:09.087 09:22:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.087 09:22:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.087 09:22:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.087 09:22:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.087 ************************************ 00:07:09.087 START TEST locking_overlapped_coremask_via_rpc 00:07:09.087 ************************************ 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=413843 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 413843 /var/tmp/spdk.sock 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 413843 ']' 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.087 09:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.087 [2024-07-25 09:22:21.851322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:09.087 [2024-07-25 09:22:21.851390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413843 ] 00:07:09.087 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.346 [2024-07-25 09:22:21.912055] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
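The locking_overlapped_coremask failure recorded just above follows directly from the two core masks: 0x7 selects cores 0 through 2 and 0x1c selects cores 2 through 4, so core 2 is the single core both targets want and the second target cannot claim its lock. Spelled out:

    0x07 = 0b00111 -> cores 0,1,2   (first target, holds spdk_cpu_lock_000..002)
    0x1c = 0b11100 -> cores 2,3,4   (second target)
    0x07 & 0x1c = 0x04 -> core 2, hence "Cannot create lock on core 2"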
00:07:09.346 [2024-07-25 09:22:21.912090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.346 [2024-07-25 09:22:21.985292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.346 [2024-07-25 09:22:21.985391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.346 [2024-07-25 09:22:21.985392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=414041 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 414041 /var/tmp/spdk2.sock 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 414041 ']' 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.604 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.605 09:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 [2024-07-25 09:22:22.200842] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:09.605 [2024-07-25 09:22:22.200904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414041 ] 00:07:09.605 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.605 [2024-07-25 09:22:22.273785] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
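Both via_rpc targets come up despite the overlapping masks because each was launched with --disable-cpumask-locks; the test then turns locking back on at runtime over JSON-RPC, which is where the claim conflict resurfaces. A hedged sketch of that runtime flip, assuming that rpc_cmd in the trace wraps the scripts/rpc.py used elsewhere in this log:

    # Sketch: re-enable core locks at runtime on both running targets.
    RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    "$RPC" framework_enable_cpumask_locks                           # first target: claims cores 0-2
    "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # second target: fails on core 2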
00:07:09.605 [2024-07-25 09:22:22.273813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.863 [2024-07-25 09:22:22.432050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.863 [2024-07-25 09:22:22.432163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.863 [2024-07-25 09:22:22.432164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.430 [2024-07-25 09:22:23.045124] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 413843 has claimed it. 
00:07:10.430 request: 00:07:10.430 { 00:07:10.430 "method": "framework_enable_cpumask_locks", 00:07:10.430 "req_id": 1 00:07:10.430 } 00:07:10.430 Got JSON-RPC error response 00:07:10.430 response: 00:07:10.430 { 00:07:10.430 "code": -32603, 00:07:10.430 "message": "Failed to claim CPU core: 2" 00:07:10.430 } 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 413843 /var/tmp/spdk.sock 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 413843 ']' 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 414041 /var/tmp/spdk2.sock 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 414041 ']' 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
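The raw exchange above shows how the conflict surfaces to a JSON-RPC client: code -32603 (internal error) with the message "Failed to claim CPU core: 2". The test asserts this with the NOT wrapper visible in the trace, which inverts the wrapped command's exit status; a simplified sketch of that pattern (the real helper in autotest_common.sh additionally records the status in es):

    # Simplified sketch: passes only when the wrapped command fails.
    NOT() { ! "$@"; }
    NOT "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks && echo 'expected failure observed'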
00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.430 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.689 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.689 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.689 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:10.689 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.689 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.689 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.689 00:07:10.689 real 0m1.595s 00:07:10.689 user 0m0.726s 00:07:10.689 sys 0m0.144s 00:07:10.689 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.689 09:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.689 ************************************ 00:07:10.689 END TEST locking_overlapped_coremask_via_rpc 00:07:10.689 ************************************ 00:07:10.689 09:22:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:10.689 09:22:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 413843 ]] 00:07:10.689 09:22:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 413843 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 413843 ']' 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 413843 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 413843 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 413843' 00:07:10.689 killing process with pid 413843 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 413843 00:07:10.689 09:22:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 413843 00:07:11.256 09:22:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 414041 ]] 00:07:11.256 09:22:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 414041 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 414041 ']' 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 414041 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
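killprocess, traced here and after every case above, probes the pid before signalling: kill -0 for liveness, then ps --no-headers -o comm= for the process name (reactor_0, or reactor_2 for the 0x1c target) so that sudo-owned processes can take a different branch, then kill and wait. A simplified reconstruction that omits the sudo branch (the real helper lives in test/common/autotest_common.sh):

    # Simplified sketch of the traced killprocess flow.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
        ps --no-headers -o comm= "$pid"     # reactor_N in the runs above
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }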
00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 414041 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 414041' 00:07:11.256 killing process with pid 414041 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 414041 00:07:11.256 09:22:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 414041 00:07:11.515 09:22:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.515 09:22:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:11.515 09:22:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 413843 ]] 00:07:11.515 09:22:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 413843 00:07:11.515 09:22:24 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 413843 ']' 00:07:11.515 09:22:24 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 413843 00:07:11.515 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (413843) - No such process 00:07:11.515 09:22:24 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 413843 is not found' 00:07:11.515 Process with pid 413843 is not found 00:07:11.515 09:22:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 414041 ]] 00:07:11.515 09:22:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 414041 00:07:11.515 09:22:24 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 414041 ']' 00:07:11.515 09:22:24 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 414041 00:07:11.515 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (414041) - No such process 00:07:11.515 09:22:24 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 414041 is not found' 00:07:11.515 Process with pid 414041 is not found 00:07:11.515 09:22:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.515 00:07:11.515 real 0m15.118s 00:07:11.515 user 0m25.999s 00:07:11.515 sys 0m4.646s 00:07:11.515 09:22:24 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.515 09:22:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.515 ************************************ 00:07:11.515 END TEST cpu_locks 00:07:11.515 ************************************ 00:07:11.515 00:07:11.515 real 0m40.012s 00:07:11.515 user 1m16.573s 00:07:11.515 sys 0m7.998s 00:07:11.515 09:22:24 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.515 09:22:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.515 ************************************ 00:07:11.515 END TEST event 00:07:11.515 ************************************ 00:07:11.515 09:22:24 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:11.515 09:22:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.515 09:22:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.515 09:22:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.515 ************************************ 00:07:11.515 START TEST thread 00:07:11.515 ************************************ 00:07:11.515 09:22:24 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:11.774 * Looking for test storage... 00:07:11.774 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:11.774 09:22:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.774 09:22:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:11.774 09:22:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.774 09:22:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.774 ************************************ 00:07:11.774 START TEST thread_poller_perf 00:07:11.774 ************************************ 00:07:11.775 09:22:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.775 [2024-07-25 09:22:24.375658] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:11.775 [2024-07-25 09:22:24.375730] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414367 ] 00:07:11.775 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.775 [2024-07-25 09:22:24.436430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.775 [2024-07-25 09:22:24.508519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.775 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:13.148 ====================================== 00:07:13.148 busy:2104369676 (cyc) 00:07:13.148 total_run_count: 829000 00:07:13.148 tsc_hz: 2100000000 (cyc) 00:07:13.148 ====================================== 00:07:13.148 poller_cost: 2538 (cyc), 1208 (nsec) 00:07:13.148 00:07:13.148 real 0m1.217s 00:07:13.148 user 0m1.137s 00:07:13.148 sys 0m0.076s 00:07:13.148 09:22:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.148 09:22:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.148 ************************************ 00:07:13.148 END TEST thread_poller_perf 00:07:13.148 ************************************ 00:07:13.148 09:22:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.148 09:22:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:13.148 09:22:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.148 09:22:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.148 ************************************ 00:07:13.148 START TEST thread_poller_perf 00:07:13.148 ************************************ 00:07:13.148 09:22:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.148 [2024-07-25 09:22:25.656107] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:13.148 [2024-07-25 09:22:25.656209] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414601 ] 00:07:13.148 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.148 [2024-07-25 09:22:25.717586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.148 [2024-07-25 09:22:25.789178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.148 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:14.083 ====================================== 00:07:14.083 busy:2100917476 (cyc) 00:07:14.083 total_run_count: 13677000 00:07:14.083 tsc_hz: 2100000000 (cyc) 00:07:14.083 ====================================== 00:07:14.083 poller_cost: 153 (cyc), 72 (nsec) 00:07:14.083 00:07:14.083 real 0m1.213s 00:07:14.083 user 0m1.134s 00:07:14.083 sys 0m0.075s 00:07:14.083 09:22:26 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.083 09:22:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.083 ************************************ 00:07:14.083 END TEST thread_poller_perf 00:07:14.083 ************************************ 00:07:14.083 09:22:26 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:14.083 09:22:26 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:14.083 09:22:26 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.083 09:22:26 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.083 09:22:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.342 ************************************ 00:07:14.342 START TEST thread_spdk_lock 00:07:14.342 ************************************ 00:07:14.343 09:22:26 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:14.343 [2024-07-25 09:22:26.932620] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
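The poller_perf summaries above reduce to one division each: poller_cost is busy cycles divided by total_run_count, converted to time via tsc_hz (2.1 GHz here). Checking both runs against the printed figures:

    run 1 (1 us period): 2104369676 cyc / 829000 runs   = 2538 cyc; 2538 / 2.1 = 1208 nsec
    run 2 (0 us period): 2100917476 cyc / 13677000 runs = 153 cyc;  153 / 2.1 = 72 nsec

The roughly sixteenfold drop in per-poller cost is consistent with the 0-microsecond run registering active pollers rather than timed ones, so each iteration avoids the timer bookkeeping.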
00:07:14.343 [2024-07-25 09:22:26.932699] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414839 ] 00:07:14.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.343 [2024-07-25 09:22:26.992411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.343 [2024-07-25 09:22:27.064643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.343 [2024-07-25 09:22:27.064646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.910 [2024-07-25 09:22:27.550911] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:14.910 [2024-07-25 09:22:27.550942] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:14.910 [2024-07-25 09:22:27.550950] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14d5bc0 00:07:14.910 [2024-07-25 09:22:27.551772] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:14.910 [2024-07-25 09:22:27.551876] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:14.910 [2024-07-25 09:22:27.551893] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:14.910 Starting test contend 00:07:14.910 Worker Delay Wait us Hold us Total us 00:07:14.910 0 3 176697 184342 361039 00:07:14.910 1 5 93677 284575 378253 00:07:14.910 PASS test contend 00:07:14.910 Starting test hold_by_poller 00:07:14.910 PASS test hold_by_poller 00:07:14.910 Starting test hold_by_message 00:07:14.910 PASS test hold_by_message 00:07:14.910 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:14.910 100014 assertions passed 00:07:14.910 0 assertions failed 00:07:14.910 00:07:14.910 real 0m0.695s 00:07:14.910 user 0m1.099s 00:07:14.910 sys 0m0.079s 00:07:14.910 09:22:27 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.910 09:22:27 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:14.910 ************************************ 00:07:14.910 END TEST thread_spdk_lock 00:07:14.910 ************************************ 00:07:14.910 00:07:14.910 real 0m3.392s 00:07:14.910 user 0m3.488s 00:07:14.910 sys 0m0.398s 00:07:14.910 09:22:27 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.910 09:22:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.910 ************************************ 00:07:14.910 END TEST thread 00:07:14.910 ************************************ 00:07:14.910 09:22:27 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:14.910 09:22:27 -- spdk/autotest.sh@189 -- # run_test app_cmdline 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:14.910 09:22:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.910 09:22:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.910 09:22:27 -- common/autotest_common.sh@10 -- # set +x 00:07:14.910 ************************************ 00:07:14.910 START TEST app_cmdline 00:07:14.910 ************************************ 00:07:14.910 09:22:27 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.169 * Looking for test storage... 00:07:15.169 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:15.169 09:22:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.169 09:22:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=415108 00:07:15.169 09:22:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 415108 00:07:15.169 09:22:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.169 09:22:27 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 415108 ']' 00:07:15.169 09:22:27 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.169 09:22:27 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.169 09:22:27 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.169 09:22:27 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.169 09:22:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.169 [2024-07-25 09:22:27.799236] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:15.169 [2024-07-25 09:22:27.799307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415108 ]
00:07:15.169 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.169 [2024-07-25 09:22:27.858310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.169 [2024-07-25 09:22:27.932844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:16.103 {
00:07:16.103   "version": "SPDK v24.09-pre git sha1 704257090",
00:07:16.103   "fields": {
00:07:16.103     "major": 24,
00:07:16.103     "minor": 9,
00:07:16.103     "patch": 0,
00:07:16.103     "suffix": "-pre",
00:07:16.103     "commit": "704257090"
00:07:16.103   }
00:07:16.103 }
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:16.103 09:22:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]]
00:07:16.103 09:22:28 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:16.361 request:
00:07:16.361 {
00:07:16.361   "method": "env_dpdk_get_mem_stats",
00:07:16.361   "req_id": 1
00:07:16.361 }
00:07:16.361 Got JSON-RPC error response
00:07:16.361 response:
00:07:16.361 {
00:07:16.361   "code": -32601,
00:07:16.361   "message": "Method not found"
00:07:16.361 }
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:16.361 09:22:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 415108
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 415108 ']'
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 415108
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:16.361 09:22:28 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 415108
00:07:16.361 09:22:29 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:16.361 09:22:29 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:16.361 09:22:29 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 415108'
00:07:16.361 killing process with pid 415108
00:07:16.361 09:22:29 app_cmdline -- common/autotest_common.sh@969 -- # kill 415108
00:07:16.361 09:22:29 app_cmdline -- common/autotest_common.sh@974 -- # wait 415108
00:07:16.618
00:07:16.618 real 0m1.625s
00:07:16.618 user 0m1.947s
00:07:16.618 sys 0m0.409s
00:07:16.618 09:22:29 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:16.618 09:22:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:16.618 ************************************
00:07:16.618 END TEST app_cmdline
00:07:16.618 ************************************
00:07:16.618 09:22:29 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:07:16.618 09:22:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:16.618 09:22:29 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:16.618 09:22:29 -- common/autotest_common.sh@10 -- # set +x
00:07:16.618 ************************************
00:07:16.618 START TEST version
00:07:16.618 ************************************
00:07:16.618 09:22:29 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
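The version test that starts here compares the version compiled into include/spdk/version.h against what the SPDK Python package reports. The extraction pattern is visible in the trace that follows: grep the relevant #define, take the second tab-separated field with cut, and strip quotes with tr. A condensed sketch of that pattern, assuming it runs from the SPDK repo root (this is an illustration, not the literal version.sh):

    # Sketch of the get_header_version pattern seen in the trace below.
    get_header_version() {
        # e.g. '#define SPDK_VERSION_MAJOR	24' -> '24'; tr strips the
        # quotes that appear on string-valued defines such as SUFFIX.
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 9
    patch=$(get_header_version PATCH)    # 0
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    echo "$version"                      # 24.9, checked against python's 24.9rc0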
00:07:16.877 * Looking for test storage...
00:07:16.877 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:16.877 09:22:29 version -- app/version.sh@17 -- # get_header_version major
00:07:16.877 09:22:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:16.877 09:22:29 version -- app/version.sh@14 -- # cut -f2
00:07:16.877 09:22:29 version -- app/version.sh@14 -- # tr -d '"'
00:07:16.877 09:22:29 version -- app/version.sh@17 -- # major=24
00:07:16.877 09:22:29 version -- app/version.sh@18 -- # get_header_version minor
00:07:16.877 09:22:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:16.877 09:22:29 version -- app/version.sh@14 -- # cut -f2
00:07:16.877 09:22:29 version -- app/version.sh@14 -- # tr -d '"'
00:07:16.877 09:22:29 version -- app/version.sh@18 -- # minor=9
00:07:16.877 09:22:29 version -- app/version.sh@19 -- # get_header_version patch
00:07:16.877 09:22:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:16.877 09:22:29 version -- app/version.sh@14 -- # cut -f2
00:07:16.877 09:22:29 version -- app/version.sh@14 -- # tr -d '"'
00:07:16.877 09:22:29 version -- app/version.sh@19 -- # patch=0
00:07:16.877 09:22:29 version -- app/version.sh@20 -- # get_header_version suffix
00:07:16.877 09:22:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:16.877 09:22:29 version -- app/version.sh@14 -- # cut -f2
00:07:16.877 09:22:29 version -- app/version.sh@14 -- # tr -d '"'
00:07:16.877 09:22:29 version -- app/version.sh@20 -- # suffix=-pre
00:07:16.877 09:22:29 version -- app/version.sh@22 -- # version=24.9
00:07:16.877 09:22:29 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:16.877 09:22:29 version -- app/version.sh@28 -- # version=24.9rc0
00:07:16.877 09:22:29 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python
00:07:16.877 09:22:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:16.877 09:22:29 version -- app/version.sh@30 -- # py_version=24.9rc0
00:07:16.877 09:22:29 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:07:16.877
00:07:16.877 real 0m0.145s
00:07:16.877 user 0m0.093s
00:07:16.877 sys 0m0.085s
00:07:16.877 09:22:29 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:16.877 09:22:29 version -- common/autotest_common.sh@10 -- # set +x
00:07:16.877 ************************************
00:07:16.877 END TEST version
00:07:16.877 ************************************
00:07:16.877 09:22:29 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@202 -- # uname -s
00:07:16.877 09:22:29 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]]
00:07:16.877 09:22:29 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:07:16.877 09:22:29 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:07:16.877 09:22:29 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@264 -- # timing_exit lib
00:07:16.877 09:22:29 -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:16.877 09:22:29 -- common/autotest_common.sh@10 -- # set +x
00:07:16.877 09:22:29 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:07:16.877 09:22:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:07:16.877 09:22:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:07:16.877 09:22:29 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]]
00:07:16.877 09:22:29 -- spdk/autotest.sh@376 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh
00:07:16.877 09:22:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:16.877 09:22:29 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:16.877 09:22:29 -- common/autotest_common.sh@10 -- # set +x
00:07:16.877 ************************************
00:07:16.877 START TEST llvm_fuzz
00:07:16.877 ************************************
00:07:16.877 09:22:29 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh
00:07:17.139 * Looking for test storage...
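That "Looking for test storage..." line comes from set_test_storage in autotest_common.sh, whose trace closes this section: it prepares a spdk.XXXXXX fallback under /tmp with mktemp, then walks df -T output to find a mount with enough free space (this run asks for 2214592512 bytes, i.e. roughly 2 GiB plus overhead). A rough sketch of that selection loop, with the mounts/fss/avails bookkeeping arrays from the real helper simplified away:

    # Sketch of the df walk traced at the end of this section (simplified).
    requested_size=2214592512          # bytes, as in the trace below
    target_dir=""
    while read -r source fs size use avail _ mount; do
        # df -T reports sizes in 1K blocks; compare available bytes to the request
        if (( avail * 1024 >= requested_size )); then
            target_dir=$mount
            break
        fi
    done < <(df -T | grep -v Filesystem)
    echo "would place test storage under: ${target_dir:-/tmp fallback}"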
00:07:17.139 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:17.139 09:22:29 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.139 09:22:29 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:17.139 ************************************ 00:07:17.139 START TEST nvmf_llvm_fuzz 00:07:17.139 ************************************ 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:17.139 * Looking for test storage... 
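Before the nvmf run's own storage probe resolves below, note how the fuzz targets were chosen in the llvm.sh trace just above: with no explicit target list, the script globs test/fuzz/llvm/* and keeps the basenames, which is why the helper files common.sh and llvm-gcov.sh show up alongside nvmf and vfio, and a case dispatch then runs only the real targets (nvmf here, vfio later). A sketch of that discovery, assuming $rootdir points at the SPDK tree and using a hypothetical fuzzer_names variable for the explicit-list check:

    # Sketch of llvm.sh's target discovery (simplified, not the real script).
    fuzzers=()
    if [[ -z ${fuzzer_names:-} ]]; then          # no targets named explicitly
        fuzzers=("$rootdir/test/fuzz/llvm/"*)    # common.sh llvm-gcov.sh nvmf vfio
        fuzzers=("${fuzzers[@]##*/}")            # strip directories, keep basenames
    fi
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio)
                echo "run_test ${fuzzer}_llvm_fuzz $rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
            *) ;;                                # helper scripts are skipped
        esac
    done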
00:07:17.139 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:17.139 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:17.140 #define SPDK_CONFIG_H 00:07:17.140 #define SPDK_CONFIG_APPS 1 00:07:17.140 #define SPDK_CONFIG_ARCH native 00:07:17.140 #undef SPDK_CONFIG_ASAN 00:07:17.140 #undef SPDK_CONFIG_AVAHI 00:07:17.140 #undef SPDK_CONFIG_CET 00:07:17.140 #define SPDK_CONFIG_COVERAGE 1 00:07:17.140 #define SPDK_CONFIG_CROSS_PREFIX 00:07:17.140 #undef SPDK_CONFIG_CRYPTO 00:07:17.140 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:17.140 #undef SPDK_CONFIG_CUSTOMOCF 00:07:17.140 #undef SPDK_CONFIG_DAOS 00:07:17.140 #define SPDK_CONFIG_DAOS_DIR 00:07:17.140 #define SPDK_CONFIG_DEBUG 1 00:07:17.140 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:17.140 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:17.140 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:17.140 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:17.140 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:17.140 #undef SPDK_CONFIG_DPDK_UADK 00:07:17.140 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:17.140 #define SPDK_CONFIG_EXAMPLES 1 00:07:17.140 #undef SPDK_CONFIG_FC 00:07:17.140 #define SPDK_CONFIG_FC_PATH 00:07:17.140 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:17.140 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:17.140 #undef SPDK_CONFIG_FUSE 00:07:17.140 #define SPDK_CONFIG_FUZZER 1 00:07:17.140 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:17.140 #undef SPDK_CONFIG_GOLANG 00:07:17.140 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:17.140 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:17.140 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:17.140 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:17.140 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:17.140 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:17.140 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:17.140 #define SPDK_CONFIG_IDXD 1 00:07:17.140 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:17.140 #undef SPDK_CONFIG_IPSEC_MB 00:07:17.140 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:17.140 #define SPDK_CONFIG_ISAL 1 00:07:17.140 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:07:17.140 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:17.140 #define SPDK_CONFIG_LIBDIR 00:07:17.140 #undef SPDK_CONFIG_LTO 00:07:17.140 #define SPDK_CONFIG_MAX_LCORES 128 00:07:17.140 #define SPDK_CONFIG_NVME_CUSE 1 00:07:17.140 #undef SPDK_CONFIG_OCF 00:07:17.140 #define SPDK_CONFIG_OCF_PATH 00:07:17.140 #define SPDK_CONFIG_OPENSSL_PATH 00:07:17.140 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:17.140 #define SPDK_CONFIG_PGO_DIR 00:07:17.140 #undef SPDK_CONFIG_PGO_USE 00:07:17.140 #define SPDK_CONFIG_PREFIX /usr/local 00:07:17.140 #undef SPDK_CONFIG_RAID5F 00:07:17.140 #undef SPDK_CONFIG_RBD 00:07:17.140 #define SPDK_CONFIG_RDMA 1 00:07:17.140 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:17.140 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:17.140 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:17.140 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:17.140 #undef SPDK_CONFIG_SHARED 00:07:17.140 #undef SPDK_CONFIG_SMA 00:07:17.140 #define SPDK_CONFIG_TESTS 1 00:07:17.140 #undef SPDK_CONFIG_TSAN 00:07:17.140 #define SPDK_CONFIG_UBLK 1 00:07:17.140 #define SPDK_CONFIG_UBSAN 1 00:07:17.140 #undef SPDK_CONFIG_UNIT_TESTS 00:07:17.140 #undef SPDK_CONFIG_URING 00:07:17.140 #define SPDK_CONFIG_URING_PATH 00:07:17.140 #undef SPDK_CONFIG_URING_ZNS 00:07:17.140 #undef SPDK_CONFIG_USDT 00:07:17.140 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:17.140 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:17.140 #define SPDK_CONFIG_VFIO_USER 1 00:07:17.140 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:17.140 #define SPDK_CONFIG_VHOST 1 00:07:17.140 #define SPDK_CONFIG_VIRTIO 1 00:07:17.140 #undef SPDK_CONFIG_VTUNE 00:07:17.140 #define SPDK_CONFIG_VTUNE_DIR 00:07:17.140 #define SPDK_CONFIG_WERROR 1 00:07:17.140 #define SPDK_CONFIG_WPDK_DIR 00:07:17.140 #undef SPDK_CONFIG_XNVME 00:07:17.140 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:17.140 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:17.141 
09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:17.141 09:22:29 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:17.141 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # cat 00:07:17.142 09:22:29 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export valgrind= 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # valgrind= 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # uname -s 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:07:17.142 09:22:29 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@281 -- # MAKE=make 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j88 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@301 -- # TEST_MODE= 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@320 -- # [[ -z 415485 ]] 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@320 -- # kill -0 415485 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:07:17.142 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local mount target_dir 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.Pn0gmv 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.Pn0gmv/tests/nvmf /tmp/spdk.Pn0gmv 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # df -T 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # 
uses["$mount"]=0 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=948682752 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4335747072 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=54566313984 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742047232 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=7175733248 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30867648512 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=12342374400 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348411904 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=6037504 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30870654976 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=368640 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:17.143 09:22:29 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=6174199808 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174203904 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:07:17.143 * Looking for test storage... 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@370 -- # local target_space new_size 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mount=/ 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # target_space=54566313984 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # new_size=9390325760 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.143 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # return 0 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:17.143 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:17.144 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:17.144 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:17.144 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:17.403 09:22:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:17.403 [2024-07-25 09:22:29.980262] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:17.403 [2024-07-25 09:22:29.980348] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415610 ] 00:07:17.403 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.403 [2024-07-25 09:22:30.146099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.661 [2024-07-25 09:22:30.211916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.661 [2024-07-25 09:22:30.270292] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.661 [2024-07-25 09:22:30.286498] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:17.661 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.661 INFO: Seed: 2821989509 00:07:17.661 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:17.661 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:17.661 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:17.661 INFO: A corpus is not provided, starting from an empty corpus 00:07:17.661 #2 INITED exec/s: 0 rss: 63Mb 00:07:17.661 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
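(Annotation) The run-0 launch traced above can be replayed by hand. A minimal sketch, assuming the workspace layout shown in this log; the redirect of the sed output into /tmp/fuzz_json_0.conf is not visible in the trace and is an assumption:

  #!/usr/bin/env bash
  # Sketch: replay fuzzer run 0 with the exact flags from the trace above.
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk  # checkout path from this log
  # Rewrite the listener port in the target config (redirect target assumed):
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_0.conf
  # nvmf/run.sh@45 invocation: core mask 0x1, 512 MB hugepage memory (-s 512),
  # 1-second run (-t 1), fuzzer index 0 (-Z 0), NVMe/TCP target at 127.0.0.1:4400.
  "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$SPDK/../output/llvm/" \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' \
      -c /tmp/fuzz_json_0.conf -t 1 -D "$SPDK/../corpus/llvm_nvmf_0" -Z 0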
00:07:17.661 This may also happen if the target rejected all inputs we tried so far 00:07:17.661 [2024-07-25 09:22:30.335503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:17.661 [2024-07-25 09:22:30.335530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.661 NEW_FUNC[1/700]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:17.661 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.661 #4 NEW cov: 11962 ft: 11953 corp: 2/110b lim: 320 exec/s: 0 rss: 71Mb L: 109/109 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:17.919 [2024-07-25 09:22:30.475919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:17.919 [2024-07-25 09:22:30.475952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.919 #5 NEW cov: 12075 ft: 12436 corp: 3/219b lim: 320 exec/s: 0 rss: 72Mb L: 109/109 MS: 1 ChangeBinInt- 00:07:17.919 [2024-07-25 09:22:30.525952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:17.919 [2024-07-25 09:22:30.525976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.919 #6 NEW cov: 12081 ft: 12785 corp: 4/341b lim: 320 exec/s: 0 rss: 72Mb L: 122/122 MS: 1 CopyPart- 00:07:17.919 [2024-07-25 09:22:30.566067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:17.919 [2024-07-25 09:22:30.566099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.919 #7 NEW cov: 12166 ft: 13007 corp: 5/450b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeBit- 00:07:17.919 [2024-07-25 09:22:30.616220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:17.919 [2024-07-25 09:22:30.616245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.919 #8 NEW cov: 12166 ft: 13123 corp: 6/559b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeByte- 00:07:17.919 [2024-07-25 09:22:30.656279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f669f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:17.919 [2024-07-25 09:22:30.656306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.919 #9 NEW cov: 12166 ft: 13211 corp: 7/668b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeBinInt- 00:07:17.919 [2024-07-25 09:22:30.696471] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:17.919 [2024-07-25 09:22:30.696495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.177 #15 NEW cov: 12166 ft: 13315 corp: 8/777b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeBit- 00:07:18.177 [2024-07-25 09:22:30.746616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.177 [2024-07-25 09:22:30.746639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.177 #16 NEW cov: 12166 ft: 13353 corp: 9/886b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeBinInt- 00:07:18.177 [2024-07-25 09:22:30.796710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f669f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.177 [2024-07-25 09:22:30.796735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.177 #17 NEW cov: 12166 ft: 13380 corp: 10/995b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 CrossOver- 00:07:18.177 [2024-07-25 09:22:30.846881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.177 [2024-07-25 09:22:30.846905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.177 #18 NEW cov: 12166 ft: 13510 corp: 11/1104b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeByte- 00:07:18.177 [2024-07-25 09:22:30.886956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f669f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.177 [2024-07-25 09:22:30.886979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.177 #19 NEW cov: 12166 ft: 13563 corp: 12/1213b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeByte- 00:07:18.177 [2024-07-25 09:22:30.927136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f669f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.177 [2024-07-25 09:22:30.927160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.177 #20 NEW cov: 12166 ft: 13600 corp: 13/1322b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ShuffleBytes- 00:07:18.177 [2024-07-25 09:22:30.977224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:0000006d SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.177 [2024-07-25 09:22:30.977248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.435 #21 NEW cov: 12166 ft: 13628 corp: 14/1431b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeBinInt- 00:07:18.435 
[2024-07-25 09:22:31.017339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.017362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.435 #22 NEW cov: 12166 ft: 13659 corp: 15/1540b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ShuffleBytes- 00:07:18.435 [2024-07-25 09:22:31.057450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.057474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.435 #23 NEW cov: 12166 ft: 13698 corp: 16/1625b lim: 320 exec/s: 0 rss: 72Mb L: 85/122 MS: 1 EraseBytes- 00:07:18.435 [2024-07-25 09:22:31.097595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.097619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.435 #24 NEW cov: 12166 ft: 13709 corp: 17/1734b lim: 320 exec/s: 0 rss: 72Mb L: 109/122 MS: 1 ChangeBinInt- 00:07:18.435 [2024-07-25 09:22:31.147808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.147833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.435 #25 NEW cov: 12166 ft: 13755 corp: 18/1857b lim: 320 exec/s: 0 rss: 72Mb L: 123/123 MS: 1 InsertByte- 00:07:18.435 [2024-07-25 09:22:31.198007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.198031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.435 [2024-07-25 09:22:31.198086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f619f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.198099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.435 NEW_FUNC[1/2]: 0x139b500 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2093 00:07:18.435 NEW_FUNC[2/2]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:18.435 #26 NEW cov: 12222 ft: 13989 corp: 19/2026b lim: 320 exec/s: 0 rss: 72Mb L: 169/169 MS: 1 CopyPart- 00:07:18.435 [2024-07-25 09:22:31.238203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.238227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.435 [2024-07-25 09:22:31.238286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x1a9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.238297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.435 [2024-07-25 09:22:31.238350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:6 nsid:9f9f9f9f cdw10:9fff9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.435 [2024-07-25 09:22:31.238363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.693 #27 NEW cov: 12222 ft: 14240 corp: 20/2258b lim: 320 exec/s: 0 rss: 72Mb L: 232/232 MS: 1 CrossOver- 00:07:18.693 [2024-07-25 09:22:31.288198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.693 [2024-07-25 09:22:31.288222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.693 #28 NEW cov: 12222 ft: 14329 corp: 21/2381b lim: 320 exec/s: 0 rss: 72Mb L: 123/232 MS: 1 CMP- DE: "\016\000"- 00:07:18.693 [2024-07-25 09:22:31.328321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2e9f9f9f9f9f9f9f 00:07:18.693 [2024-07-25 09:22:31.328345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.693 #29 NEW cov: 12222 ft: 14341 corp: 22/2490b lim: 320 exec/s: 29 rss: 72Mb L: 109/232 MS: 1 ChangeByte- 00:07:18.693 [2024-07-25 09:22:31.368426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.693 [2024-07-25 09:22:31.368450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.693 #30 NEW cov: 12222 ft: 14350 corp: 23/2587b lim: 320 exec/s: 30 rss: 72Mb L: 97/232 MS: 1 EraseBytes- 00:07:18.693 [2024-07-25 09:22:31.408494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.693 [2024-07-25 09:22:31.408518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.693 #31 NEW cov: 12222 ft: 14377 corp: 24/2672b lim: 320 exec/s: 31 rss: 72Mb L: 85/232 MS: 1 ChangeBinInt- 00:07:18.693 [2024-07-25 09:22:31.458683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.693 [2024-07-25 09:22:31.458707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.693 #32 NEW cov: 12222 ft: 14415 corp: 25/2795b lim: 320 exec/s: 32 rss: 72Mb L: 123/232 MS: 1 ShuffleBytes- 
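(Annotation) The "#N NEW cov: ... ft: ... corp: ..." records in this run are standard libFuzzer status lines: cov counts covered code points, ft counts features, corp reports corpus units/bytes, L gives the input length against the 320-byte limit, and MS names the mutation sequence that produced the new input. One way to pull the coverage curve out of a captured log, as a sketch ("build.log" is a hypothetical file holding this console output):

  # Sketch: extract the libFuzzer coverage progression from a saved log.
  grep -oE '#[0-9]+ (NEW|DONE) cov: [0-9]+ ft: [0-9]+' build.log \
      | awk '{print $1, $4, $6}'   # iteration, cov, ft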
00:07:18.951 [2024-07-25 09:22:31.508791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.951 [2024-07-25 09:22:31.508815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.951 #33 NEW cov: 12222 ft: 14424 corp: 26/2905b lim: 320 exec/s: 33 rss: 73Mb L: 110/232 MS: 1 InsertByte- 00:07:18.951 [2024-07-25 09:22:31.549350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.951 [2024-07-25 09:22:31.549374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.951 [2024-07-25 09:22:31.549428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x1a9f9f9f9f9f9f9f 00:07:18.951 [2024-07-25 09:22:31.549439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.951 [2024-07-25 09:22:31.549493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ed) qid:0 cid:6 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.951 [2024-07-25 09:22:31.549505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.951 [2024-07-25 09:22:31.549559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ed) qid:0 cid:7 nsid:edededed cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.951 [2024-07-25 09:22:31.549570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.951 NEW_FUNC[1/1]: 0x17cb2c0 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:07:18.951 #34 NEW cov: 12235 ft: 15067 corp: 27/3220b lim: 320 exec/s: 34 rss: 73Mb L: 315/315 MS: 1 InsertRepeatedBytes- 00:07:18.951 [2024-07-25 09:22:31.609348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2e9f9f9f9f9f9f9f 00:07:18.951 [2024-07-25 09:22:31.609372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.951 [2024-07-25 09:22:31.609426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:edededed cdw11:edededed SGL TRANSPORT DATA BLOCK TRANSPORT 0xedededededededed 00:07:18.951 [2024-07-25 09:22:31.609437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.951 [2024-07-25 09:22:31.609490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ed) qid:0 cid:6 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.951 [2024-07-25 09:22:31.609502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.951 #35 NEW cov: 12235 ft: 15411 corp: 
28/3454b lim: 320 exec/s: 35 rss: 73Mb L: 234/315 MS: 1 InsertRepeatedBytes- 00:07:18.951 [2024-07-25 09:22:31.659670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:18.952 [2024-07-25 09:22:31.659694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.952 [2024-07-25 09:22:31.659748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x1a9f9f9f9f9f9f9f 00:07:18.952 [2024-07-25 09:22:31.659759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.952 [2024-07-25 09:22:31.659816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ed) qid:0 cid:6 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-25 09:22:31.659828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.952 [2024-07-25 09:22:31.659883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ed) qid:0 cid:7 nsid:edededed cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-25 09:22:31.659894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.952 #36 NEW cov: 12235 ft: 15428 corp: 29/3769b lim: 320 exec/s: 36 rss: 73Mb L: 315/315 MS: 1 ChangeBinInt- 00:07:18.952 [2024-07-25 09:22:31.709641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2e9f9f9f9f9f9f9f 00:07:18.952 [2024-07-25 09:22:31.709665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.952 [2024-07-25 09:22:31.709719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:edededed cdw11:edededed SGL TRANSPORT DATA BLOCK TRANSPORT 0xedededededededed 00:07:18.952 [2024-07-25 09:22:31.709730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.952 [2024-07-25 09:22:31.709782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ed) qid:0 cid:6 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.952 [2024-07-25 09:22:31.709794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.952 #37 NEW cov: 12235 ft: 15440 corp: 30/4003b lim: 320 exec/s: 37 rss: 73Mb L: 234/315 MS: 1 ShuffleBytes- 00:07:19.210 [2024-07-25 09:22:31.759840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.210 [2024-07-25 09:22:31.759864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.210 [2024-07-25 09:22:31.759919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) 
qid:0 cid:5 nsid:9f9f9f9f cdw10:9f9f329f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.210 [2024-07-25 09:22:31.759930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.210 [2024-07-25 09:22:31.759986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:6 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.210 [2024-07-25 09:22:31.759998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.210 [2024-07-25 09:22:31.760055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:7 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.210 [2024-07-25 09:22:31.760066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.210 #38 NEW cov: 12235 ft: 15485 corp: 31/4288b lim: 320 exec/s: 38 rss: 73Mb L: 285/315 MS: 1 CrossOver- 00:07:19.211 [2024-07-25 09:22:31.809958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.809982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.211 [2024-07-25 09:22:31.810038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:9f9f329f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.810050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.211 [2024-07-25 09:22:31.810109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:6 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.810122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.211 [2024-07-25 09:22:31.810176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:7 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.810187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.211 #39 NEW cov: 12235 ft: 15506 corp: 32/4573b lim: 320 exec/s: 39 rss: 73Mb L: 285/315 MS: 1 ShuffleBytes- 00:07:19.211 [2024-07-25 09:22:31.860003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.860028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.211 [2024-07-25 09:22:31.860085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f000e cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x1a9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.860097] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.211 [2024-07-25 09:22:31.860152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:6 nsid:9f9f9f9f cdw10:9fff9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.860167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.211 #40 NEW cov: 12235 ft: 15536 corp: 33/4805b lim: 320 exec/s: 40 rss: 73Mb L: 232/315 MS: 1 PersAutoDict- DE: "\016\000"- 00:07:19.211 [2024-07-25 09:22:31.899907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.899931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.211 #41 NEW cov: 12235 ft: 15570 corp: 34/4916b lim: 320 exec/s: 41 rss: 74Mb L: 111/315 MS: 1 PersAutoDict- DE: "\016\000"- 00:07:19.211 [2024-07-25 09:22:31.950099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.950122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.211 #42 NEW cov: 12235 ft: 15630 corp: 35/5001b lim: 320 exec/s: 42 rss: 74Mb L: 85/315 MS: 1 ChangeByte- 00:07:19.211 [2024-07-25 09:22:31.990196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.211 [2024-07-25 09:22:31.990220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.470 #43 NEW cov: 12235 ft: 15662 corp: 36/5110b lim: 320 exec/s: 43 rss: 74Mb L: 109/315 MS: 1 ShuffleBytes- 00:07:19.470 [2024-07-25 09:22:32.040605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2e9f9f9f9f9f9f9f 00:07:19.470 [2024-07-25 09:22:32.040628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.470 [2024-07-25 09:22:32.040680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:979f9f1f cdw10:edededed cdw11:edededed SGL TRANSPORT DATA BLOCK TRANSPORT 0xededed9f9f9f9f9f 00:07:19.470 [2024-07-25 09:22:32.040691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.470 [2024-07-25 09:22:32.040744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ed) qid:0 cid:6 nsid:edededed cdw10:edededed cdw11:edededed SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.470 [2024-07-25 09:22:32.040755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.470 #44 NEW cov: 12235 ft: 15677 corp: 37/5344b lim: 320 exec/s: 44 rss: 74Mb L: 234/315 MS: 1 CrossOver- 00:07:19.470 [2024-07-25 09:22:32.080513] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.470 [2024-07-25 09:22:32.080536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.470 [2024-07-25 09:22:32.080592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:619f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.470 [2024-07-25 09:22:32.080604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.470 #45 NEW cov: 12235 ft: 15687 corp: 38/5514b lim: 320 exec/s: 45 rss: 74Mb L: 170/315 MS: 1 InsertByte- 00:07:19.470 [2024-07-25 09:22:32.120644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:0000006d SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.470 [2024-07-25 09:22:32.120678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.470 [2024-07-25 09:22:32.120736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.470 [2024-07-25 09:22:32.120748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.470 #46 NEW cov: 12235 ft: 15697 corp: 39/5671b lim: 320 exec/s: 46 rss: 74Mb L: 157/315 MS: 1 CopyPart- 00:07:19.470 [2024-07-25 09:22:32.170727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:19.470 [2024-07-25 09:22:32.170750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.470 #49 NEW cov: 12235 ft: 15702 corp: 40/5741b lim: 320 exec/s: 49 rss: 74Mb L: 70/315 MS: 3 ChangeBit-InsertRepeatedBytes-InsertRepeatedBytes- 00:07:19.470 [2024-07-25 09:22:32.210821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f669f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.470 [2024-07-25 09:22:32.210844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.470 #50 NEW cov: 12235 ft: 15744 corp: 41/5815b lim: 320 exec/s: 50 rss: 74Mb L: 74/315 MS: 1 EraseBytes- 00:07:19.470 [2024-07-25 09:22:32.260985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.470 [2024-07-25 09:22:32.261009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.729 #51 NEW cov: 12235 ft: 15754 corp: 42/5938b lim: 320 exec/s: 51 rss: 74Mb L: 123/315 MS: 1 ChangeByte- 00:07:19.729 [2024-07-25 09:22:32.311327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:9f9f9f9f cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.729 [2024-07-25 09:22:32.311350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.729 [2024-07-25 09:22:32.311405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9f) qid:0 cid:5 nsid:939f9f9f cdw10:93939393 cdw11:93939393 SGL TRANSPORT DATA BLOCK TRANSPORT 0x9393939393939393 00:07:19.729 [2024-07-25 09:22:32.311416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.729 [2024-07-25 09:22:32.311468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (93) qid:0 cid:6 nsid:93939393 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x9f9f9f9f9f9f9f9f 00:07:19.729 [2024-07-25 09:22:32.311480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.729 #52 NEW cov: 12235 ft: 15762 corp: 43/6130b lim: 320 exec/s: 26 rss: 74Mb L: 192/315 MS: 1 InsertRepeatedBytes- 00:07:19.729 #52 DONE cov: 12235 ft: 15762 corp: 43/6130b lim: 320 exec/s: 26 rss: 74Mb 00:07:19.729 ###### Recommended dictionary. ###### 00:07:19.729 "\016\000" # Uses: 2 00:07:19.729 ###### End of recommended dictionary. ###### 00:07:19.729 Done 52 runs in 2 second(s) 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.729 09:22:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:19.729 [2024-07-25 09:22:32.484664] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:19.729 [2024-07-25 09:22:32.484742] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415976 ] 00:07:19.729 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.987 [2024-07-25 09:22:32.655863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.987 [2024-07-25 09:22:32.720012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.987 [2024-07-25 09:22:32.778454] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.987 [2024-07-25 09:22:32.794689] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:20.245 INFO: Running with entropic power schedule (0xFF, 100). 00:07:20.245 INFO: Seed: 1035024592 00:07:20.245 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:20.245 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:20.245 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:20.245 INFO: A corpus is not provided, starting from an empty corpus 00:07:20.245 #2 INITED exec/s: 0 rss: 63Mb 00:07:20.245 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
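(Annotation) Run 1 repeats the run-0 setup with the next fuzzer index: printf %02d 1 gives port 4401 and corpus llvm_nvmf_1, matching run 0's 4400/llvm_nvmf_0. A sketch of the apparent numbering scheme, inferred from the two runs (the actual expression in nvmf/run.sh@34 is not visible in this trace):

  # Sketch: per-fuzzer port and corpus naming, inferred from runs 0 and 1.
  i=1                               # fuzzer index passed to start_llvm_fuzz
  port="44$(printf %02d "$i")"      # run 0 -> 4400, run 1 -> 4401, ... 24 -> 4424
  corpus_dir="$SPDK/../corpus/llvm_nvmf_$i"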
00:07:20.245 [2024-07-25 09:22:32.843409] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:20.245 [2024-07-25 09:22:32.843537] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:20.245 [2024-07-25 09:22:32.843756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.245 [2024-07-25 09:22:32.843785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.245 [2024-07-25 09:22:32.843840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:93938393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.245 [2024-07-25 09:22:32.843853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.245 NEW_FUNC[1/701]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67
00:07:20.245 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:20.245 #4 NEW cov: 12041 ft: 12022 corp: 2/16b lim: 30 exec/s: 0 rss: 70Mb L: 15/15 MS: 2 CrossOver-InsertRepeatedBytes-
00:07:20.245 [2024-07-25 09:22:32.973700] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096)
00:07:20.245 [2024-07-25 09:22:32.973836] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:20.245 [2024-07-25 09:22:32.974061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.245 [2024-07-25 09:22:32.974097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.245 [2024-07-25 09:22:32.974152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.245 [2024-07-25 09:22:32.974165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.245 #5 NEW cov: 12178 ft: 12574 corp: 3/31b lim: 30 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 ChangeBinInt-
00:07:20.245 [2024-07-25 09:22:33.023785] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:20.245 [2024-07-25 09:22:33.023908] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x41
00:07:20.245 [2024-07-25 09:22:33.024137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.245 [2024-07-25 09:22:33.024163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.245 [2024-07-25 09:22:33.024217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.245 [2024-07-25 09:22:33.024231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.245 #8 NEW cov: 12184 ft: 12968 corp: 4/43b lim: 30 exec/s: 0 rss: 70Mb L: 12/15 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes-
00:07:20.503 [2024-07-25 09:22:33.063893] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:20.503 [2024-07-25 09:22:33.064229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.064254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.503 [2024-07-25 09:22:33.064308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.064320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.503 #9 NEW cov: 12286 ft: 13276 corp: 5/59b lim: 30 exec/s: 0 rss: 70Mb L: 16/16 MS: 1 CopyPart-
00:07:20.503 [2024-07-25 09:22:33.114096] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:20.503 [2024-07-25 09:22:33.114216] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x41
00:07:20.503 [2024-07-25 09:22:33.114331] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f1f
00:07:20.503 [2024-07-25 09:22:33.114547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.114572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.503 [2024-07-25 09:22:33.114632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.114644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.503 [2024-07-25 09:22:33.114699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:1f1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.114711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:20.503 #10 NEW cov: 12286 ft: 13650 corp: 6/77b lim: 30 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 InsertRepeatedBytes-
00:07:20.503 [2024-07-25 09:22:33.154150] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:20.503 [2024-07-25 09:22:33.154266] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:20.503 [2024-07-25 09:22:33.154478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a839a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.154503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.503 [2024-07-25 09:22:33.154557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:93938393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.154570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.503 #11 NEW cov: 12286 ft: 13782 corp: 7/92b lim: 30 exec/s: 0 rss: 70Mb L: 15/18 MS: 1 ChangeBinInt-
00:07:20.503 [2024-07-25 09:22:33.194230] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:20.503 [2024-07-25 09:22:33.194449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a839a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.194473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.503 #12 NEW cov: 12286 ft: 14201 corp: 8/101b lim: 30 exec/s: 0 rss: 71Mb L: 9/18 MS: 1 EraseBytes-
00:07:20.503 [2024-07-25 09:22:33.244412] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096)
00:07:20.503 [2024-07-25 09:22:33.244535] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:20.503 [2024-07-25 09:22:33.244746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.244770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.503 [2024-07-25 09:22:33.244825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.244837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.503 #13 NEW cov: 12286 ft: 14246 corp: 9/116b lim: 30 exec/s: 0 rss: 71Mb L: 15/18 MS: 1 ShuffleBytes-
00:07:20.503 [2024-07-25 09:22:33.294503] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (4100) > buf size (4096)
00:07:20.503 [2024-07-25 09:22:33.294721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.503 [2024-07-25 09:22:33.294746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.503 #16 NEW cov: 12286 ft: 14352 corp: 10/125b lim: 30 exec/s: 0 rss: 71Mb L: 9/28 MS: 3 CrossOver-CMP-CopyPart- DE: "\004\000\000\000"-
00:07:20.759 [2024-07-25 09:22:33.334705] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:20.759 [2024-07-25 09:22:33.334832] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x41
00:07:20.759 [2024-07-25 09:22:33.334946] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f1f
00:07:20.759 [2024-07-25 09:22:33.335190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.335216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.759 [2024-07-25 09:22:33.335270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.335283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.759 [2024-07-25 09:22:33.335335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:1f1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.335348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:20.759 #17 NEW cov: 12286 ft: 14400 corp: 11/144b lim: 30 exec/s: 0 rss: 71Mb L: 19/19 MS: 1 InsertByte-
00:07:20.759 [2024-07-25 09:22:33.384844] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096)
00:07:20.759 [2024-07-25 09:22:33.384966] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100006d93
00:07:20.759 [2024-07-25 09:22:33.385180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.385205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.759 [2024-07-25 09:22:33.385257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.385269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.759 #18 NEW cov: 12286 ft: 14441 corp: 12/159b lim: 30 exec/s: 0 rss: 71Mb L: 15/19 MS: 1 ChangeBinInt-
00:07:20.759 [2024-07-25 09:22:33.434959] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:20.759 [2024-07-25 09:22:33.435085] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (4100) > buf size (4096)
00:07:20.759 [2024-07-25 09:22:33.435310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.435334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.759 [2024-07-25 09:22:33.435384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.435397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.759 #19 NEW cov: 12286 ft: 14509 corp: 13/175b lim: 30 exec/s: 0 rss: 71Mb L: 16/19 MS: 1 PersAutoDict- DE: "\004\000\000\000"-
00:07:20.759 [2024-07-25 09:22:33.475163] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10252) > buf size (4096)
00:07:20.759 [2024-07-25 09:22:33.475698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.475722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.759 [2024-07-25 09:22:33.475774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.759 [2024-07-25 09:22:33.475789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.760 [2024-07-25 09:22:33.475841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.760 [2024-07-25 09:22:33.475852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:20.760 [2024-07-25 09:22:33.475903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.760 [2024-07-25 09:22:33.475914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:20.760 #21 NEW cov: 12286 ft: 15035 corp: 14/202b lim: 30 exec/s: 0 rss: 71Mb L: 27/27 MS: 2 CMP-InsertRepeatedBytes- DE: "\002\000\000\000"-
00:07:20.760 [2024-07-25 09:22:33.515162] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xf
00:07:20.760 [2024-07-25 09:22:33.515378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a009a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.760 [2024-07-25 09:22:33.515401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.760 #22 NEW cov: 12286 ft: 15050 corp: 15/211b lim: 30 exec/s: 0 rss: 71Mb L: 9/27 MS: 1 CrossOver-
00:07:20.760 [2024-07-25 09:22:33.565444] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10252) > buf size (4096)
00:07:20.760 [2024-07-25 09:22:33.565979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.760 [2024-07-25 09:22:33.566004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:20.760 [2024-07-25 09:22:33.566057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.760 [2024-07-25 09:22:33.566074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:20.760 [2024-07-25 09:22:33.566130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.760 [2024-07-25 09:22:33.566143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:20.760 [2024-07-25 09:22:33.566196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:20.760 [2024-07-25 09:22:33.566207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:21.018 #23 NEW cov: 12286 ft: 15080 corp: 16/239b lim: 30 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 InsertByte-
00:07:21.018 [2024-07-25 09:22:33.615553] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096)
00:07:21.018 [2024-07-25 09:22:33.616125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a009a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.616149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.018 [2024-07-25 09:22:33.616204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.616216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.018 [2024-07-25 09:22:33.616267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.616283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:21.018 [2024-07-25 09:22:33.616335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.616347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:21.018 #25 NEW cov: 12286 ft: 15119 corp: 17/266b lim: 30 exec/s: 0 rss: 72Mb L: 27/28 MS: 2 CrossOver-InsertRepeatedBytes-
00:07:21.018 [2024-07-25 09:22:33.665589] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000a93
00:07:21.018 [2024-07-25 09:22:33.665803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9a93830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.665827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.018 #26 NEW cov: 12286 ft: 15136 corp: 18/275b lim: 30 exec/s: 0 rss: 72Mb L: 9/28 MS: 1 ShuffleBytes-
00:07:21.018 [2024-07-25 09:22:33.705780] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:21.018 [2024-07-25 09:22:33.705904] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x41
00:07:21.018 [2024-07-25 09:22:33.706017] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f1f
00:07:21.018 [2024-07-25 09:22:33.706237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.706262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.018 [2024-07-25 09:22:33.706317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.706329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.018 [2024-07-25 09:22:33.706379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:1f1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.706391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:21.018 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:21.018 #27 NEW cov: 12309 ft: 15179 corp: 19/294b lim: 30 exec/s: 0 rss: 72Mb L: 19/28 MS: 1 ChangeBinInt-
00:07:21.018 [2024-07-25 09:22:33.755847] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8a
00:07:21.018 [2024-07-25 09:22:33.756060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.756093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.018 #31 NEW cov: 12309 ft: 15226 corp: 20/303b lim: 30 exec/s: 0 rss: 72Mb L: 9/28 MS: 4 CrossOver-ChangeBit-ShuffleBytes-CopyPart-
00:07:21.018 [2024-07-25 09:22:33.795976] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:21.018 [2024-07-25 09:22:33.796107] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x41
00:07:21.018 [2024-07-25 09:22:33.796337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.796362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.018 [2024-07-25 09:22:33.796414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.018 [2024-07-25 09:22:33.796429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.018 #32 NEW cov: 12309 ft: 15235 corp: 21/315b lim: 30 exec/s: 0 rss: 72Mb L: 12/28 MS: 1 ShuffleBytes-
00:07:21.275 [2024-07-25 09:22:33.836039] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xaa
00:07:21.275 [2024-07-25 09:22:33.836269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:33.836293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.275 #33 NEW cov: 12309 ft: 15265 corp: 22/324b lim: 30 exec/s: 33 rss: 72Mb L: 9/28 MS: 1 ChangeBit-
00:07:21.275 [2024-07-25 09:22:33.886188] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000093
00:07:21.275 [2024-07-25 09:22:33.886404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:33.886427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.275 #34 NEW cov: 12309 ft: 15344 corp: 23/333b lim: 30 exec/s: 34 rss: 72Mb L: 9/28 MS: 1 ShuffleBytes-
00:07:21.275 [2024-07-25 09:22:33.926349] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa
00:07:21.275 [2024-07-25 09:22:33.926570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:33.926595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.275 #35 NEW cov: 12309 ft: 15382 corp: 24/343b lim: 30 exec/s: 35 rss: 72Mb L: 10/28 MS: 1 CrossOver-
00:07:21.275 [2024-07-25 09:22:33.976491] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:21.275 [2024-07-25 09:22:33.976614] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xc1
00:07:21.275 [2024-07-25 09:22:33.976825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:33.976850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.275 [2024-07-25 09:22:33.976902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:33.976916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.275 #36 NEW cov: 12309 ft: 15391 corp: 25/355b lim: 30 exec/s: 36 rss: 72Mb L: 12/28 MS: 1 ChangeBit-
00:07:21.275 [2024-07-25 09:22:34.026657] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fbfb
00:07:21.275 [2024-07-25 09:22:34.026780] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fbfb
00:07:21.275 [2024-07-25 09:22:34.027101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0883fb cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:34.027125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.275 [2024-07-25 09:22:34.027181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:fbfb83fb cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:34.027195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.275 [2024-07-25 09:22:34.027245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:34.027261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:21.275 #37 NEW cov: 12309 ft: 15404 corp: 26/377b lim: 30 exec/s: 37 rss: 72Mb L: 22/28 MS: 1 InsertRepeatedBytes-
00:07:21.275 [2024-07-25 09:22:34.076761] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:21.275 [2024-07-25 09:22:34.076885] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:21.275 [2024-07-25 09:22:34.077119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:34.077144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.275 [2024-07-25 09:22:34.077195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:93938393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.275 [2024-07-25 09:22:34.077208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.532 #38 NEW cov: 12309 ft: 15417 corp: 27/392b lim: 30 exec/s: 38 rss: 72Mb L: 15/28 MS: 1 ShuffleBytes-
00:07:21.532 [2024-07-25 09:22:34.116857] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:21.532 [2024-07-25 09:22:34.117204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.532 [2024-07-25 09:22:34.117229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.532 NEW_FUNC[1/2]: 0x11d64c0 in nvmf_ctrlr_unmask_aen /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2282
00:07:21.532 NEW_FUNC[2/2]: 0x11dabd0 in nvmf_get_changed_ns_list_log_page /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2477
00:07:21.532 #39 NEW cov: 12326 ft: 15453 corp: 28/408b lim: 30 exec/s: 39 rss: 72Mb L: 16/28 MS: 1 CopyPart-
00:07:21.532 [2024-07-25 09:22:34.177110] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (532484) > buf size (4096)
00:07:21.532 [2024-07-25 09:22:34.177237] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x41
00:07:21.532 [2024-07-25 09:22:34.177351] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f1f
00:07:21.532 [2024-07-25 09:22:34.177588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:08000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.532 [2024-07-25 09:22:34.177613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.532 [2024-07-25 09:22:34.177665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.532 [2024-07-25 09:22:34.177678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.532 [2024-07-25 09:22:34.177729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:1f1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.532 [2024-07-25 09:22:34.177741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:21.532 #40 NEW cov: 12326 ft: 15479 corp: 29/427b lim: 30 exec/s: 40 rss: 72Mb L: 19/28 MS: 1 ShuffleBytes-
00:07:21.532 [2024-07-25 09:22:34.217310] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10252) > buf size (4096)
00:07:21.532 [2024-07-25 09:22:34.217434] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096)
00:07:21.532 [2024-07-25 09:22:34.217963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.533 [2024-07-25 09:22:34.217987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.533 [2024-07-25 09:22:34.218043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.533 [2024-07-25 09:22:34.218055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.533 [2024-07-25 09:22:34.218115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.533 [2024-07-25 09:22:34.218128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:21.533 [2024-07-25 09:22:34.218179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.533 [2024-07-25 09:22:34.218191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:21.533 [2024-07-25 09:22:34.218242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.533 [2024-07-25 09:22:34.218254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:21.533 #41 NEW cov: 12326 ft: 15554 corp: 30/457b lim: 30 exec/s: 41 rss: 72Mb L: 30/30 MS: 1 CrossOver-
00:07:21.533 [2024-07-25 09:22:34.257223] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:21.533 [2024-07-25 09:22:34.257450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.533 [2024-07-25 09:22:34.257474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.533 #42 NEW cov: 12326 ft: 15564 corp: 31/466b lim: 30 exec/s: 42 rss: 72Mb L: 9/30 MS: 1 CrossOver-
00:07:21.533 [2024-07-25 09:22:34.297379] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200009393
00:07:21.533 [2024-07-25 09:22:34.297602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a02ea cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.533 [2024-07-25 09:22:34.297626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.533 #43 NEW cov: 12326 ft: 15578 corp: 32/476b lim: 30 exec/s: 43 rss: 72Mb L: 10/30 MS: 1 InsertByte-
00:07:21.533 [2024-07-25 09:22:34.337499] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (73732) > buf size (4096)
00:07:21.533 [2024-07-25 09:22:34.337726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:48000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.533 [2024-07-25 09:22:34.337751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.791 #44 NEW cov: 12326 ft: 15624 corp: 33/485b lim: 30 exec/s: 44 rss: 72Mb L: 9/30 MS: 1 CMP- DE: "H\000\000\000\000\000\000\000"-
00:07:21.791 [2024-07-25 09:22:34.377608] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096)
00:07:21.791 [2024-07-25 09:22:34.377731] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6d93
00:07:21.791 [2024-07-25 09:22:34.377951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.791 [2024-07-25 09:22:34.377976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.791 [2024-07-25 09:22:34.378028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000000fc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.791 [2024-07-25 09:22:34.378040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.791 #45 NEW cov: 12326 ft: 15632 corp: 34/500b lim: 30 exec/s: 45 rss: 72Mb L: 15/30 MS: 1 ChangeBinInt-
00:07:21.791 [2024-07-25 09:22:34.417753] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:21.791 [2024-07-25 09:22:34.417879] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:21.791 [2024-07-25 09:22:34.418096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e0a8393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.791 [2024-07-25 09:22:34.418121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.791 [2024-07-25 09:22:34.418173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:93938393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.791 [2024-07-25 09:22:34.418186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.792 #46 NEW cov: 12326 ft: 15639 corp: 35/515b lim: 30 exec/s: 46 rss: 72Mb L: 15/30 MS: 1 ChangeBit-
00:07:21.792 [2024-07-25 09:22:34.467897] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:21.792 [2024-07-25 09:22:34.468129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.792 [2024-07-25 09:22:34.468152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.792 #47 NEW cov: 12326 ft: 15682 corp: 36/526b lim: 30 exec/s: 47 rss: 72Mb L: 11/30 MS: 1 EraseBytes-
00:07:21.792 [2024-07-25 09:22:34.518016] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096)
00:07:21.792 [2024-07-25 09:22:34.518146] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xf93
00:07:21.792 [2024-07-25 09:22:34.518367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.792 [2024-07-25 09:22:34.518389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.792 [2024-07-25 09:22:34.518441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.792 [2024-07-25 09:22:34.518453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.792 #48 NEW cov: 12326 ft: 15696 corp: 37/542b lim: 30 exec/s: 48 rss: 72Mb L: 16/30 MS: 1 InsertByte-
00:07:21.792 [2024-07-25 09:22:34.558170] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:21.792 [2024-07-25 09:22:34.558298] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:21.792 [2024-07-25 09:22:34.558511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.792 [2024-07-25 09:22:34.558536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.792 [2024-07-25 09:22:34.558588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:91938393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.792 [2024-07-25 09:22:34.558601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.792 #49 NEW cov: 12326 ft: 15701 corp: 38/557b lim: 30 exec/s: 49 rss: 72Mb L: 15/30 MS: 1 ChangeBit-
00:07:21.792 [2024-07-25 09:22:34.598401] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:21.792 [2024-07-25 09:22:34.598530] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (150572) > buf size (4096)
00:07:21.792 [2024-07-25 09:22:34.598648] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x9393
00:07:21.792 [2024-07-25 09:22:34.598771] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (150532) > buf size (4096)
00:07:21.792 [2024-07-25 09:22:34.599007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a839a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.792 [2024-07-25 09:22:34.599032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:21.792 [2024-07-25 09:22:34.599087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:930a0008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.792 [2024-07-25 09:22:34.599100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:21.792 [2024-07-25 09:22:34.599154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.792 [2024-07-25 09:22:34.599166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:21.792 [2024-07-25 09:22:34.599218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:93000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.599230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:22.067 #50 NEW cov: 12326 ft: 15711 corp: 39/585b lim: 30 exec/s: 50 rss: 72Mb L: 28/30 MS: 1 CrossOver-
00:07:22.067 [2024-07-25 09:22:34.638340] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393
00:07:22.067 [2024-07-25 09:22:34.638565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a839a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.638589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:22.067 #51 NEW cov: 12326 ft: 15745 corp: 40/594b lim: 30 exec/s: 51 rss: 72Mb L: 9/30 MS: 1 ChangeBinInt-
00:07:22.067 [2024-07-25 09:22:34.678547] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096)
00:07:22.067 [2024-07-25 09:22:34.678672] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x41
00:07:22.067 [2024-07-25 09:22:34.678788] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f1f
00:07:22.067 [2024-07-25 09:22:34.679006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.679030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:22.067 [2024-07-25 09:22:34.679088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.679103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:22.067 [2024-07-25 09:22:34.679154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:1f1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.679166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:22.067 #52 NEW cov: 12326 ft: 15747 corp: 41/613b lim: 30 exec/s: 52 rss: 72Mb L: 19/30 MS: 1 ChangeBit-
00:07:22.067 [2024-07-25 09:22:34.728646] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096)
00:07:22.067 [2024-07-25 09:22:34.728773] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xf93
00:07:22.067 [2024-07-25 09:22:34.729006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.729034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:22.067 [2024-07-25 09:22:34.729095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:002f0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.729109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:22.067 #53 NEW cov: 12326 ft: 15761 corp: 42/629b lim: 30 exec/s: 53 rss: 72Mb L: 16/30 MS: 1 InsertByte-
00:07:22.067 [2024-07-25 09:22:34.768715] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096)
00:07:22.067 [2024-07-25 09:22:34.768937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0048 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.768961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:22.067 #54 NEW cov: 12326 ft: 15803 corp: 43/639b lim: 30 exec/s: 54 rss: 72Mb L: 10/30 MS: 1 PersAutoDict- DE: "H\000\000\000\000\000\000\000"-
00:07:22.067 [2024-07-25 09:22:34.818912] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8a
00:07:22.067 [2024-07-25 09:22:34.819255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.819279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:22.067 [2024-07-25 09:22:34.819334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.067 [2024-07-25 09:22:34.819347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:22.067 #55 NEW cov: 12326 ft: 15806 corp: 44/652b lim: 30 exec/s: 27 rss: 72Mb L: 13/30 MS: 1 PersAutoDict- DE: "\004\000\000\000"-
00:07:22.067 #55 DONE cov: 12326 ft: 15806 corp: 44/652b lim: 30 exec/s: 27 rss: 72Mb
00:07:22.067 ###### Recommended dictionary. ######
00:07:22.067 "\004\000\000\000" # Uses: 2
00:07:22.067 "\002\000\000\000" # Uses: 0
00:07:22.067 "H\000\000\000\000\000\000\000" # Uses: 1
00:07:22.067 ###### End of recommended dictionary. ######
00:07:22.067 Done 55 runs in 2 second(s)
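For context on the "Get log page: len (...) > buf size (4096)" errors in the run above: per the NVMe base specification, GET LOG PAGE carries the log page identifier in CDW10[7:0] and a 0's-based dword count split across CDW10[31:16] (NUMDL) and CDW11[15:0] (NUMDU); the offset in the "Invalid log page offset" errors comes from CDW12/CDW13 (LPOL/LPOU), which the command printer above does not display. A small standalone sketch (not code from this harness) reproducing the length the target computed for cdw10:0a0a0000 cdw11:00000000:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t cdw10 = 0x0a0a0000, cdw11 = 0x00000000;

	uint8_t  lid   = cdw10 & 0xff;            /* log page identifier */
	uint32_t numdl = (cdw10 >> 16) & 0xffff;  /* dword count, low 16 bits */
	uint32_t numdu = cdw11 & 0xffff;          /* dword count, high 16 bits */
	uint64_t len   = ((((uint64_t)numdu << 16) | numdl) + 1) * 4;

	/* Prints: lid=0x00 len=10284 -- matching the "len (10284) > buf
	 * size (4096)" errors logged for this cdw10/cdw11 pair. */
	printf("lid=0x%02x len=%llu\n", lid, (unsigned long long)len);
	return 0;
}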
00:07:22.326 09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402'
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
09:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2
[2024-07-25 09:22:34.990875] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:22.326 [2024-07-25 09:22:34.990952] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416410 ]
00:07:22.326 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.583 [2024-07-25 09:22:35.154864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.583 [2024-07-25 09:22:35.220531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.583 [2024-07-25 09:22:35.278825] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:22.583 [2024-07-25 09:22:35.295065] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 ***
00:07:22.583 INFO: Running with entropic power schedule (0xFF, 100).
00:07:22.583 INFO: Seed: 3537023143
00:07:22.583 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
00:07:22.584 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
00:07:22.584 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2
00:07:22.584 INFO: A corpus is not provided, starting from an empty corpus
00:07:22.584 #2 INITED exec/s: 0 rss: 64Mb
00:07:22.584 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:22.584 This may also happen if the target rejected all inputs we tried so far
00:07:22.584 [2024-07-25 09:22:35.362181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.584 [2024-07-25 09:22:35.362215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:22.584 [2024-07-25 09:22:35.362299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.584 [2024-07-25 09:22:35.362313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:22.841 NEW_FUNC[1/700]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95
00:07:22.841 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:22.841 #13 NEW cov: 11996 ft: 11993 corp: 2/15b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 InsertRepeatedBytes-
00:07:22.841 [2024-07-25 09:22:35.542764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.841 [2024-07-25 09:22:35.542803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:22.841 [2024-07-25 09:22:35.542900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:22.841 [2024-07-25 09:22:35.542914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:22.841 #14 NEW cov: 12111 ft: 12508 corp: 3/29b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ChangeBinInt-
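For the IDENTIFY (06) commands above: the controller-or-namespace structure selector (CNS) sits in CDW10[7:0] and the controller ID (CNTID) in CDW10[31:16], per the NVMe base specification. A standalone sketch (not code from this harness) decoding the fuzzed value cdw10:f1f100f1; an undefined CNS like 0xf1 is consistent with the INVALID FIELD (00/02) completions logged above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t cdw10 = 0xf1f100f1;

	uint8_t  cns   = cdw10 & 0xff;           /* 0xf1: not a defined CNS value */
	uint16_t cntid = (cdw10 >> 16) & 0xffff; /* controller ID, 0xf1f1 */

	printf("cns=0x%02x cntid=0x%04x\n", cns, cntid);
	return 0;
}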
00:07:22.841 [2024-07-25 09:22:35.613746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.841 [2024-07-25 09:22:35.613774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.841 [2024-07-25 09:22:35.613853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.841 [2024-07-25 09:22:35.613865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.841 [2024-07-25 09:22:35.613950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.841 [2024-07-25 09:22:35.613961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.841 [2024-07-25 09:22:35.614050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f1f100f1 cdw11:f1000af1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.841 [2024-07-25 09:22:35.614061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.098 #15 NEW cov: 12117 ft: 13345 corp: 4/57b lim: 35 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 CrossOver- 00:07:23.098 [2024-07-25 09:22:35.683584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.098 [2024-07-25 09:22:35.683608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.098 [2024-07-25 09:22:35.683693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.098 [2024-07-25 09:22:35.683705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.098 #21 NEW cov: 12202 ft: 13535 corp: 5/71b lim: 35 exec/s: 0 rss: 71Mb L: 14/28 MS: 1 CopyPart- 00:07:23.098 #23 NEW cov: 12202 ft: 14201 corp: 6/78b lim: 35 exec/s: 0 rss: 72Mb L: 7/28 MS: 2 ChangeBinInt-CrossOver- 00:07:23.098 [2024-07-25 09:22:35.783857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00f1 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.098 [2024-07-25 09:22:35.783881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.098 #25 NEW cov: 12202 ft: 14445 corp: 7/87b lim: 35 exec/s: 0 rss: 72Mb L: 9/28 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:23.098 [2024-07-25 09:22:35.834073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00f1 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.098 [2024-07-25 09:22:35.834096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.098 #26 NEW cov: 12202 ft: 14500 corp: 8/95b lim: 35 exec/s: 0 rss: 72Mb L: 8/28 MS: 1 EraseBytes- 00:07:23.098 [2024-07-25 09:22:35.894860] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.099 [2024-07-25 09:22:35.894884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.357 #27 NEW cov: 12202 ft: 14526 corp: 9/109b lim: 35 exec/s: 0 rss: 72Mb L: 14/28 MS: 1 CopyPart- 00:07:23.357 [2024-07-25 09:22:35.964890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.357 [2024-07-25 09:22:35.964913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.357 [2024-07-25 09:22:35.965016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.357 [2024-07-25 09:22:35.965035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.357 #28 NEW cov: 12202 ft: 14571 corp: 10/124b lim: 35 exec/s: 0 rss: 72Mb L: 15/28 MS: 1 InsertByte- 00:07:23.357 #29 NEW cov: 12202 ft: 14613 corp: 11/131b lim: 35 exec/s: 0 rss: 72Mb L: 7/28 MS: 1 ChangeByte- 00:07:23.357 [2024-07-25 09:22:36.075611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:8000f180 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.357 [2024-07-25 09:22:36.075635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.357 [2024-07-25 09:22:36.075737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.357 [2024-07-25 09:22:36.075749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.357 #30 NEW cov: 12202 ft: 14652 corp: 12/149b lim: 35 exec/s: 0 rss: 72Mb L: 18/28 MS: 1 InsertRepeatedBytes- 00:07:23.357 [2024-07-25 09:22:36.135407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.357 [2024-07-25 09:22:36.135429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.357 #31 NEW cov: 12202 ft: 14681 corp: 13/158b lim: 35 exec/s: 0 rss: 72Mb L: 9/28 MS: 1 CrossOver- 00:07:23.615 [2024-07-25 09:22:36.185712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.615 [2024-07-25 09:22:36.185737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.615 #37 NEW cov: 12202 ft: 14695 corp: 14/166b lim: 35 exec/s: 0 rss: 72Mb L: 8/28 MS: 1 EraseBytes- 00:07:23.615 [2024-07-25 09:22:36.236383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.615 [2024-07-25 09:22:36.236407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:23.615 [2024-07-25 09:22:36.236519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.615 [2024-07-25 09:22:36.236531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.615 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:23.615 #38 NEW cov: 12225 ft: 14738 corp: 15/181b lim: 35 exec/s: 0 rss: 72Mb L: 15/28 MS: 1 ShuffleBytes- 00:07:23.615 #39 NEW cov: 12225 ft: 14784 corp: 16/188b lim: 35 exec/s: 0 rss: 72Mb L: 7/28 MS: 1 CMP- DE: "\377\377"- 00:07:23.615 [2024-07-25 09:22:36.346622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f10016f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.615 [2024-07-25 09:22:36.346646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.615 #40 NEW cov: 12225 ft: 14814 corp: 17/201b lim: 35 exec/s: 40 rss: 72Mb L: 13/28 MS: 1 CrossOver- 00:07:23.615 [2024-07-25 09:22:36.397060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fff100ff cdw11:f100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.615 [2024-07-25 09:22:36.397094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.873 #41 NEW cov: 12225 ft: 14855 corp: 18/208b lim: 35 exec/s: 41 rss: 72Mb L: 7/28 MS: 1 PersAutoDict- DE: "\377\377"- 00:07:23.873 [2024-07-25 09:22:36.457141] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.873 [2024-07-25 09:22:36.457645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.457673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.873 [2024-07-25 09:22:36.457761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.457774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.873 #42 NEW cov: 12234 ft: 14896 corp: 19/222b lim: 35 exec/s: 42 rss: 72Mb L: 14/28 MS: 1 ChangeBinInt- 00:07:23.873 [2024-07-25 09:22:36.507798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.507823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.873 [2024-07-25 09:22:36.507915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.507928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.873 #43 NEW cov: 12234 ft: 14901 corp: 20/238b lim: 35 exec/s: 43 rss: 72Mb L: 16/28 MS: 1 InsertByte- 
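Each *NOTICE* pair in the records above is SPDK echoing a fuzzed admin command and the target's answer: nvme_admin_qpair_print_command prints the IDENTIFY opcode (06) with the raw fuzz bytes packed into cdw10/cdw11, and spdk_nvme_print_completion prints the returned status, here INVALID FIELD, i.e. status code type 00 / status code 02. The p/m/dnr fields are bits of the completion's status word. Below is a minimal decode sketch assuming the LSB-first bit layout of SPDK's public struct spdk_nvme_status; the helper itself is illustrative, not an SPDK function.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bit layout mirrors struct spdk_nvme_status (spdk/nvme_spec.h);
 * assumes little-endian, LSB-first bitfields. Illustrative only. */
struct status_bits {
    uint16_t p   : 1;  /* phase tag           -> "p:0"          */
    uint16_t sc  : 8;  /* status code         -> 02 in (00/02)  */
    uint16_t sct : 3;  /* status code type    -> 00 in (00/02)  */
    uint16_t crd : 2;  /* command retry delay                   */
    uint16_t m   : 1;  /* more                -> "m:0"          */
    uint16_t dnr : 1;  /* do not retry        -> "dnr:0"        */
};

int main(void)
{
    uint16_t raw = 0x0004; /* sct 0 (generic), sc 02h: Invalid Field in Command */
    struct status_bits s;
    memcpy(&s, &raw, sizeof(s)); /* view the raw status word through the bitfields */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}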
00:07:23.873 [2024-07-25 09:22:36.558145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.558171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.873 [2024-07-25 09:22:36.558258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.558271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.873 #44 NEW cov: 12234 ft: 14905 corp: 21/253b lim: 35 exec/s: 44 rss: 73Mb L: 15/28 MS: 1 ShuffleBytes- 00:07:23.873 #45 NEW cov: 12234 ft: 14922 corp: 22/260b lim: 35 exec/s: 45 rss: 73Mb L: 7/28 MS: 1 ShuffleBytes- 00:07:23.873 [2024-07-25 09:22:36.679103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.679131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.873 [2024-07-25 09:22:36.679226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.679240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.873 [2024-07-25 09:22:36.679329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.679343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.873 [2024-07-25 09:22:36.679437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f1f100f1 cdw11:f1000abf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.873 [2024-07-25 09:22:36.679449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.130 #46 NEW cov: 12234 ft: 14936 corp: 23/288b lim: 35 exec/s: 46 rss: 73Mb L: 28/28 MS: 1 ChangeByte- 00:07:24.130 [2024-07-25 09:22:36.748714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f3f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.130 [2024-07-25 09:22:36.748744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.130 [2024-07-25 09:22:36.748833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.130 [2024-07-25 09:22:36.748845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.130 #52 NEW cov: 12234 ft: 14938 corp: 24/302b lim: 35 exec/s: 52 rss: 73Mb L: 14/28 MS: 1 ChangeBit- 00:07:24.130 [2024-07-25 09:22:36.798986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10012 cdw11:f100f1f1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.130 [2024-07-25 09:22:36.799011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.130 [2024-07-25 09:22:36.799091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.130 [2024-07-25 09:22:36.799104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.130 #53 NEW cov: 12234 ft: 14983 corp: 25/316b lim: 35 exec/s: 53 rss: 73Mb L: 14/28 MS: 1 ChangeBit- 00:07:24.130 [2024-07-25 09:22:36.849728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f10016 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.130 [2024-07-25 09:22:36.849753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.131 [2024-07-25 09:22:36.849838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.131 [2024-07-25 09:22:36.849852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.131 [2024-07-25 09:22:36.849945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.131 [2024-07-25 09:22:36.849958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.131 [2024-07-25 09:22:36.850047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.131 [2024-07-25 09:22:36.850060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.131 #54 NEW cov: 12234 ft: 14993 corp: 26/349b lim: 35 exec/s: 54 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:07:24.131 [2024-07-25 09:22:36.920074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f1f100f1 cdw11:f10016f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.131 [2024-07-25 09:22:36.920099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.131 [2024-07-25 09:22:36.920193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.131 [2024-07-25 09:22:36.920207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.131 [2024-07-25 09:22:36.920302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.131 [2024-07-25 09:22:36.920317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.131 [2024-07-25 09:22:36.920411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:24.131 [2024-07-25 09:22:36.920423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.389 #55 NEW cov: 12234 ft: 15076 corp: 27/380b lim: 35 exec/s: 55 rss: 73Mb L: 31/33 MS: 1 CopyPart- 00:07:24.389 #56 NEW cov: 12234 ft: 15088 corp: 28/387b lim: 35 exec/s: 56 rss: 73Mb L: 7/33 MS: 1 ChangeBinInt- 00:07:24.389 #57 NEW cov: 12234 ft: 15096 corp: 29/394b lim: 35 exec/s: 57 rss: 73Mb L: 7/33 MS: 1 ChangeBit- 00:07:24.389 #58 NEW cov: 12234 ft: 15108 corp: 30/401b lim: 35 exec/s: 58 rss: 73Mb L: 7/33 MS: 1 ChangeByte- 00:07:24.389 [2024-07-25 09:22:37.140474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:ff00f10a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.389 [2024-07-25 09:22:37.140498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.389 #59 NEW cov: 12234 ft: 15146 corp: 31/416b lim: 35 exec/s: 59 rss: 73Mb L: 15/33 MS: 1 CrossOver- 00:07:24.389 [2024-07-25 09:22:37.190189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00f1ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.389 [2024-07-25 09:22:37.190212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.646 #60 NEW cov: 12234 ft: 15153 corp: 32/424b lim: 35 exec/s: 60 rss: 74Mb L: 8/33 MS: 1 ShuffleBytes- 00:07:24.646 [2024-07-25 09:22:37.250832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:ff00f10a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.646 [2024-07-25 09:22:37.250856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.646 #61 NEW cov: 12234 ft: 15208 corp: 33/440b lim: 35 exec/s: 61 rss: 74Mb L: 16/33 MS: 1 InsertByte- 00:07:24.646 [2024-07-25 09:22:37.320945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f3f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.646 [2024-07-25 09:22:37.320969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.646 [2024-07-25 09:22:37.321072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00f1 cdw11:f100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.646 [2024-07-25 09:22:37.321084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.646 #62 NEW cov: 12234 ft: 15223 corp: 34/458b lim: 35 exec/s: 31 rss: 74Mb L: 18/33 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:24.646 #62 DONE cov: 12234 ft: 15223 corp: 34/458b lim: 35 exec/s: 31 rss: 74Mb 00:07:24.646 ###### Recommended dictionary. ###### 00:07:24.646 "\377\377" # Uses: 2 00:07:24.647 "\377\377\377\377" # Uses: 0 00:07:24.647 ###### End of recommended dictionary. 
###### 00:07:24.647 Done 62 runs in 2 second(s) 00:07:24.904 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.904 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.904 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.904 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:24.904 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:24.904 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.905 09:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:24.905 [2024-07-25 09:22:37.510035] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
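The trace above shows the per-run pattern each fuzzer follows: nvmf/run.sh derives the TCP port from the fuzzer type (printf %02d 3, port 4403), rewrites trsvcid 4420 to 4403 in fuzz_json.conf with sed, writes LSAN suppressions for two known leak sites (spdk_nvmf_qpair_disconnect, nvmf_ctrlr_create), and launches llvm_nvme_fuzz with -Z 3 to select the fuzzer. Once the target is up, libFuzzer reports the matching entry point in the records below: fuzz_admin_abort_command in llvm_nvme_fuzz.c, reached from the shared TestOneInput. The following is a hypothetical sketch of a harness in that shape; only the entry-point names come from the log and the Abort opcode from the NVMe spec, while the struct, field mapping, and dispatch are invented for illustration.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical harness sketch; not the real llvm_nvme_fuzz.c. */
struct fuzz_cmd {
    uint8_t  opc;
    uint32_t cdw10;
    uint32_t cdw11;
};

/* Type-3 handler: build an Abort command (admin opcode 08h per the
 * NVMe spec) from raw fuzz bytes. */
static void fuzz_admin_abort_command(struct fuzz_cmd *cmd,
                                     const uint8_t *data, size_t size)
{
    cmd->opc = 0x08;
    if (size >= 4) {
        memcpy(&cmd->cdw10, data, 4);      /* SQID[15:0], CID[31:16] */
    }
    if (size >= 8) {
        memcpy(&cmd->cdw11, data + 4, 4);
    }
}

/* Shared entry point: each fuzz input becomes one admin command, which
 * is then submitted to the target selected with -F. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    struct fuzz_cmd cmd = {0};
    fuzz_admin_abort_command(&cmd, data, size);
    /* ...submit cmd over the transport and print the completion... */
    return 0;
}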
00:07:24.905 [2024-07-25 09:22:37.510101] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416839 ] 00:07:24.905 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.905 [2024-07-25 09:22:37.678064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.162 [2024-07-25 09:22:37.742287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.162 [2024-07-25 09:22:37.801434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.162 [2024-07-25 09:22:37.817659] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:25.162 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.162 INFO: Seed: 1763051978 00:07:25.162 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:25.162 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:25.162 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:25.162 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.162 #2 INITED exec/s: 0 rss: 63Mb 00:07:25.162 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.162 This may also happen if the target rejected all inputs we tried so far 00:07:25.419 NEW_FUNC[1/685]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:25.419 NEW_FUNC[2/685]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.419 #5 NEW cov: 11857 ft: 11892 corp: 2/5b lim: 20 exec/s: 0 rss: 70Mb L: 4/4 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:25.419 NEW_FUNC[1/4]: 0x10050d0 in posix_sock_group_impl_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1982 00:07:25.419 NEW_FUNC[2/4]: 0x1ac1ef0 in spdk_sock_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:741 00:07:25.419 #11 NEW cov: 12008 ft: 12342 corp: 3/9b lim: 20 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:25.419 #12 NEW cov: 12014 ft: 12640 corp: 4/13b lim: 20 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:25.419 #13 NEW cov: 12099 ft: 12836 corp: 5/17b lim: 20 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeBit- 00:07:25.676 #14 NEW cov: 12099 ft: 13076 corp: 6/21b lim: 20 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeByte- 00:07:25.676 #15 NEW cov: 12099 ft: 13170 corp: 7/25b lim: 20 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:25.676 #16 NEW cov: 12099 ft: 13300 corp: 8/30b lim: 20 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:07:25.676 #17 NEW cov: 12099 ft: 13315 corp: 9/36b lim: 20 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 CopyPart- 00:07:25.676 #18 NEW cov: 12099 ft: 13402 corp: 10/40b lim: 20 exec/s: 0 rss: 72Mb L: 4/6 MS: 1 ShuffleBytes- 00:07:25.934 #19 NEW cov: 12099 ft: 13428 corp: 11/45b lim: 20 exec/s: 0 rss: 72Mb L: 5/6 MS: 1 InsertByte- 00:07:25.934 #20 NEW cov: 12099 ft: 13482 corp: 12/50b lim: 20 exec/s: 0 rss: 72Mb L: 5/6 MS: 1 CopyPart- 00:07:25.934 #21 NEW cov: 12099 ft: 13515 corp: 13/56b lim: 20 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 InsertByte- 00:07:25.934 #22 NEW cov: 12099 ft: 13548 corp: 14/60b lim: 20 exec/s: 0 rss: 72Mb L: 4/6 MS: 1 
ChangeBinInt- 00:07:25.934 #23 NEW cov: 12099 ft: 13564 corp: 15/64b lim: 20 exec/s: 0 rss: 72Mb L: 4/6 MS: 1 ChangeBit- 00:07:26.190 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:26.190 #24 NEW cov: 12122 ft: 13633 corp: 16/68b lim: 20 exec/s: 0 rss: 72Mb L: 4/6 MS: 1 ChangeBit- 00:07:26.190 #25 NEW cov: 12122 ft: 13668 corp: 17/74b lim: 20 exec/s: 25 rss: 72Mb L: 6/6 MS: 1 ChangeByte- 00:07:26.190 #26 NEW cov: 12148 ft: 14089 corp: 18/91b lim: 20 exec/s: 26 rss: 72Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:07:26.190 #27 NEW cov: 12148 ft: 14117 corp: 19/97b lim: 20 exec/s: 27 rss: 72Mb L: 6/17 MS: 1 ShuffleBytes- 00:07:26.447 #28 NEW cov: 12148 ft: 14139 corp: 20/104b lim: 20 exec/s: 28 rss: 72Mb L: 7/17 MS: 1 InsertByte- 00:07:26.447 #29 NEW cov: 12153 ft: 14388 corp: 21/113b lim: 20 exec/s: 29 rss: 72Mb L: 9/17 MS: 1 EraseBytes- 00:07:26.447 #30 NEW cov: 12153 ft: 14440 corp: 22/119b lim: 20 exec/s: 30 rss: 72Mb L: 6/17 MS: 1 ShuffleBytes- 00:07:26.447 NEW_FUNC[1/7]: 0x117ba50 in nvmf_ctrlr_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3443 00:07:26.447 NEW_FUNC[2/7]: 0x117c660 in spdk_nvmf_request_get_bdev /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4934 00:07:26.447 #31 NEW cov: 12288 ft: 14712 corp: 23/123b lim: 20 exec/s: 31 rss: 73Mb L: 4/17 MS: 1 ChangeBinInt- 00:07:26.447 #32 NEW cov: 12288 ft: 14731 corp: 24/128b lim: 20 exec/s: 32 rss: 73Mb L: 5/17 MS: 1 InsertByte- 00:07:26.706 #33 NEW cov: 12292 ft: 14886 corp: 25/143b lim: 20 exec/s: 33 rss: 73Mb L: 15/17 MS: 1 CrossOver- 00:07:26.706 #34 NEW cov: 12292 ft: 14900 corp: 26/147b lim: 20 exec/s: 34 rss: 73Mb L: 4/17 MS: 1 CrossOver- 00:07:26.706 #36 NEW cov: 12292 ft: 14976 corp: 27/151b lim: 20 exec/s: 36 rss: 73Mb L: 4/17 MS: 2 EraseBytes-InsertByte- 00:07:26.706 #37 NEW cov: 12292 ft: 15003 corp: 28/168b lim: 20 exec/s: 37 rss: 73Mb L: 17/17 MS: 1 ChangeBit- 00:07:26.964 #38 NEW cov: 12292 ft: 15020 corp: 29/172b lim: 20 exec/s: 38 rss: 73Mb L: 4/17 MS: 1 ChangeBit- 00:07:26.964 #39 NEW cov: 12292 ft: 15041 corp: 30/178b lim: 20 exec/s: 39 rss: 73Mb L: 6/17 MS: 1 ChangeBinInt- 00:07:26.964 #40 NEW cov: 12292 ft: 15076 corp: 31/183b lim: 20 exec/s: 40 rss: 73Mb L: 5/17 MS: 1 ChangeBit- 00:07:26.964 [2024-07-25 09:22:39.668006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.964 [2024-07-25 09:22:39.668043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.964 NEW_FUNC[1/15]: 0x16c5ef0 in nvme_ctrlr_queue_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3267 00:07:26.964 NEW_FUNC[2/15]: 0x16eaf70 in nvme_ctrlr_process_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3227 00:07:26.964 #41 NEW cov: 12509 ft: 15352 corp: 32/197b lim: 20 exec/s: 41 rss: 73Mb L: 14/17 MS: 1 InsertRepeatedBytes- 00:07:26.964 #42 NEW cov: 12509 ft: 15368 corp: 33/215b lim: 20 exec/s: 42 rss: 73Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:07:27.223 #43 NEW cov: 12509 ft: 15380 corp: 34/220b lim: 20 exec/s: 43 rss: 74Mb L: 5/18 MS: 1 InsertByte- 00:07:27.223 #44 NEW cov: 12509 ft: 15385 corp: 35/237b lim: 20 exec/s: 22 rss: 74Mb L: 17/18 MS: 1 InsertRepeatedBytes- 00:07:27.223 #44 DONE cov: 12509 ft: 15385 corp: 35/237b lim: 20 exec/s: 22 rss: 74Mb 00:07:27.223 Done 44 
runs in 2 second(s) 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.223 09:22:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:27.223 [2024-07-25 09:22:40.013662] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
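The -F argument in the command line above is a standard SPDK transport ID string, and the target acknowledges it in the next records with '*** NVMe/TCP Target Listening on 127.0.0.1 port 4404 ***'. As a client-side reader's sketch, the same string can be parsed and dialed with SPDK's public API; spdk_nvme_transport_id_parse, spdk_nvme_connect, and spdk_nvme_detach are real functions from spdk/nvme.h, while the surrounding main() is a minimal illustration and not part of the test.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {0};
    const char *str = "trtype:tcp adrfam:IPv4 "
                      "subnqn:nqn.2016-06.io.spdk:cnode1 "
                      "traddr:127.0.0.1 trsvcid:4404";

    /* SPDK environment must be initialized before any NVMe call. */
    spdk_env_opts_init(&opts);
    if (spdk_env_init(&opts) != 0) {
        fprintf(stderr, "env init failed\n");
        return 1;
    }
    if (spdk_nvme_transport_id_parse(&trid, str) != 0) {
        fprintf(stderr, "trid parse failed\n");
        return 1;
    }

    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }
    spdk_nvme_detach(ctrlr);
    return 0;
}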
00:07:27.223 [2024-07-25 09:22:40.013740] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417278 ] 00:07:27.482 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.482 [2024-07-25 09:22:40.182242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.482 [2024-07-25 09:22:40.246408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.740 [2024-07-25 09:22:40.304647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.740 [2024-07-25 09:22:40.320869] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:27.740 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.740 INFO: Seed: 4268057278 00:07:27.740 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:27.740 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:27.740 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:27.740 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.740 #2 INITED exec/s: 0 rss: 63Mb 00:07:27.740 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.740 This may also happen if the target rejected all inputs we tried so far 00:07:27.740 [2024-07-25 09:22:40.365578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.740 [2024-07-25 09:22:40.365609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.740 NEW_FUNC[1/701]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:27.740 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.740 #3 NEW cov: 12019 ft: 12005 corp: 2/13b lim: 35 exec/s: 0 rss: 71Mb L: 12/12 MS: 1 InsertRepeatedBytes- 00:07:27.740 [2024-07-25 09:22:40.535929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.740 [2024-07-25 09:22:40.535970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.998 #14 NEW cov: 12132 ft: 12625 corp: 3/24b lim: 35 exec/s: 0 rss: 71Mb L: 11/12 MS: 1 CrossOver- 00:07:27.998 [2024-07-25 09:22:40.596027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0cff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.998 [2024-07-25 09:22:40.596058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.998 #15 NEW cov: 12138 ft: 12862 corp: 4/36b lim: 35 exec/s: 0 rss: 71Mb L: 12/12 MS: 1 ChangeBinInt- 00:07:27.998 [2024-07-25 09:22:40.676307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.998 [2024-07-25 
09:22:40.676346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.998 [2024-07-25 09:22:40.676375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.998 [2024-07-25 09:22:40.676388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.998 #16 NEW cov: 12223 ft: 13827 corp: 5/53b lim: 35 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:07:27.998 [2024-07-25 09:22:40.766579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.998 [2024-07-25 09:22:40.766609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.998 [2024-07-25 09:22:40.766639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.998 [2024-07-25 09:22:40.766653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.256 #17 NEW cov: 12223 ft: 13959 corp: 6/69b lim: 35 exec/s: 0 rss: 72Mb L: 16/17 MS: 1 CrossOver- 00:07:28.256 [2024-07-25 09:22:40.826716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.256 [2024-07-25 09:22:40.826744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.256 [2024-07-25 09:22:40.826773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.256 [2024-07-25 09:22:40.826785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.256 [2024-07-25 09:22:40.826810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.256 [2024-07-25 09:22:40.826822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.256 #18 NEW cov: 12223 ft: 14255 corp: 7/93b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:28.256 [2024-07-25 09:22:40.916903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0adf cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.256 [2024-07-25 09:22:40.916931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.256 [2024-07-25 09:22:40.916959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.256 [2024-07-25 09:22:40.916971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.256 #19 NEW cov: 12223 ft: 14293 corp: 8/109b lim: 35 exec/s: 0 rss: 72Mb L: 16/24 MS: 1 ChangeBit- 
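To read the cdw values in these fuzzer-4 records: for Create I/O Completion Queue (admin opcode 05h) the NVMe base specification defines CDW10[31:16] as the queue size (0's based), CDW10[15:0] as the queue ID, CDW11[31:16] as the interrupt vector, and CDW11 bits 1/0 as IEN/PC. Every completion above reads INVALID OPCODE (00/01), which is consistent: queue creation is a memory-transport (PCIe) command, and a fabrics target sets up queues through the Fabrics Connect command instead. A small decode of one record's values follows, as a reader's aid rather than SPDK code.

#include <stdint.h>
#include <stdio.h>

/* Field split of Create I/O Completion Queue (opcode 05h) CDW10/CDW11,
 * per the NVMe base spec. */
static void decode_create_cq(uint32_t cdw10, uint32_t cdw11)
{
    printf("qid=0x%04x qsize=0x%04x iv=0x%04x ien=%u pc=%u\n",
           cdw10 & 0xffffu,     /* queue identifier        */
           cdw10 >> 16,         /* queue size, 0's based   */
           cdw11 >> 16,         /* interrupt vector        */
           (cdw11 >> 1) & 1u,   /* interrupts enabled      */
           cdw11 & 1u);         /* physically contiguous   */
}

int main(void)
{
    decode_create_cq(0xffff0aff, 0xffff0003); /* cdw values from a record above */
    return 0;
}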
00:07:28.256 [2024-07-25 09:22:41.007124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff7f0adf cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.256 [2024-07-25 09:22:41.007153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.256 [2024-07-25 09:22:41.007181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.256 [2024-07-25 09:22:41.007194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.514 #20 NEW cov: 12223 ft: 14327 corp: 9/125b lim: 35 exec/s: 0 rss: 72Mb L: 16/24 MS: 1 ChangeBit- 00:07:28.514 [2024-07-25 09:22:41.087337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff7f0adf cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-07-25 09:22:41.087364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.515 [2024-07-25 09:22:41.087392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff46ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.515 [2024-07-25 09:22:41.087404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.515 #21 NEW cov: 12223 ft: 14409 corp: 10/141b lim: 35 exec/s: 0 rss: 72Mb L: 16/24 MS: 1 ChangeByte- 00:07:28.515 [2024-07-25 09:22:41.167608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.515 [2024-07-25 09:22:41.167635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.515 [2024-07-25 09:22:41.167663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffff0c cdw11:ff0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.515 [2024-07-25 09:22:41.167675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.515 [2024-07-25 09:22:41.167700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.515 [2024-07-25 09:22:41.167712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.515 #22 NEW cov: 12223 ft: 14501 corp: 11/165b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 CopyPart- 00:07:28.515 [2024-07-25 09:22:41.257914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.515 [2024-07-25 09:22:41.257940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.515 [2024-07-25 09:22:41.257968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffff0c cdw11:ff0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.515 [2024-07-25 09:22:41.257981] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.515 [2024-07-25 09:22:41.258005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.515 [2024-07-25 09:22:41.258018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.515 [2024-07-25 09:22:41.258042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.515 [2024-07-25 09:22:41.258059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.773 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:28.773 #23 NEW cov: 12246 ft: 14842 corp: 12/194b lim: 35 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:28.773 [2024-07-25 09:22:41.348136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00090003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.348164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.773 [2024-07-25 09:22:41.348193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffff0c cdw11:ff0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.348206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.773 [2024-07-25 09:22:41.348231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.348244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.773 [2024-07-25 09:22:41.348269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.348282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.773 #24 NEW cov: 12246 ft: 14927 corp: 13/223b lim: 35 exec/s: 24 rss: 72Mb L: 29/29 MS: 1 ChangeBinInt- 00:07:28.773 [2024-07-25 09:22:41.428177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.428204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.773 #25 NEW cov: 12246 ft: 14996 corp: 14/235b lim: 35 exec/s: 25 rss: 72Mb L: 12/29 MS: 1 InsertByte- 00:07:28.773 [2024-07-25 09:22:41.488452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0adf cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.488479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.773 
[2024-07-25 09:22:41.488509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:0aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.488522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.773 [2024-07-25 09:22:41.488548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.488561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.773 #26 NEW cov: 12246 ft: 15008 corp: 15/256b lim: 35 exec/s: 26 rss: 72Mb L: 21/29 MS: 1 InsertRepeatedBytes- 00:07:28.773 [2024-07-25 09:22:41.548606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.548634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.773 [2024-07-25 09:22:41.548664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffff0c cdw11:ff0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.548677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.773 [2024-07-25 09:22:41.548707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-07-25 09:22:41.548720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.031 #27 NEW cov: 12246 ft: 15031 corp: 16/277b lim: 35 exec/s: 27 rss: 72Mb L: 21/29 MS: 1 EraseBytes- 00:07:29.031 [2024-07-25 09:22:41.608696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.608723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.031 [2024-07-25 09:22:41.608750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.608762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.031 #33 NEW cov: 12246 ft: 15041 corp: 17/293b lim: 35 exec/s: 33 rss: 72Mb L: 16/29 MS: 1 ShuffleBytes- 00:07:29.031 [2024-07-25 09:22:41.668960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.668988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.031 [2024-07-25 09:22:41.669016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.669028] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.031 [2024-07-25 09:22:41.669052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff07ffff cdw11:07070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.669065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.031 [2024-07-25 09:22:41.669095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:07070707 cdw11:07070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.669124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.031 #34 NEW cov: 12246 ft: 15105 corp: 18/327b lim: 35 exec/s: 34 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:29.031 [2024-07-25 09:22:41.759238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00090003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.759268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.031 [2024-07-25 09:22:41.759298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:0c0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.759322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.031 [2024-07-25 09:22:41.759345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.759357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.031 [2024-07-25 09:22:41.759381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.031 [2024-07-25 09:22:41.759393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.031 #35 NEW cov: 12246 ft: 15118 corp: 19/356b lim: 35 exec/s: 35 rss: 72Mb L: 29/34 MS: 1 ShuffleBytes- 00:07:29.289 [2024-07-25 09:22:41.849394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff7f0adf cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:41.849423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.289 [2024-07-25 09:22:41.849452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:41.849465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.289 [2024-07-25 09:22:41.849491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 
09:22:41.849504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.289 #36 NEW cov: 12246 ft: 15143 corp: 20/382b lim: 35 exec/s: 36 rss: 73Mb L: 26/34 MS: 1 InsertRepeatedBytes- 00:07:29.289 [2024-07-25 09:22:41.939604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0adf cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:41.939633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.289 [2024-07-25 09:22:41.939663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:03000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:41.939676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.289 #37 NEW cov: 12246 ft: 15153 corp: 21/398b lim: 35 exec/s: 37 rss: 73Mb L: 16/34 MS: 1 ChangeBinInt- 00:07:29.289 [2024-07-25 09:22:41.999735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0adf cdw11:0aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:41.999766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.289 [2024-07-25 09:22:41.999796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:41.999808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.289 #38 NEW cov: 12246 ft: 15174 corp: 22/412b lim: 35 exec/s: 38 rss: 73Mb L: 14/34 MS: 1 EraseBytes- 00:07:29.289 [2024-07-25 09:22:42.059941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:42.059968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.289 [2024-07-25 09:22:42.059997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:42.060009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.289 [2024-07-25 09:22:42.060033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.289 [2024-07-25 09:22:42.060044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.547 #39 NEW cov: 12246 ft: 15189 corp: 23/436b lim: 35 exec/s: 39 rss: 73Mb L: 24/34 MS: 1 ShuffleBytes- 00:07:29.547 [2024-07-25 09:22:42.120083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00090003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.547 [2024-07-25 09:22:42.120114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:29.547 [2024-07-25 09:22:42.120142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:0c0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.547 [2024-07-25 09:22:42.120155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.547 [2024-07-25 09:22:42.120179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.548 [2024-07-25 09:22:42.120192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.548 #40 NEW cov: 12246 ft: 15201 corp: 24/462b lim: 35 exec/s: 40 rss: 73Mb L: 26/34 MS: 1 EraseBytes- 00:07:29.548 [2024-07-25 09:22:42.200246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0adf cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.548 [2024-07-25 09:22:42.200273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.548 [2024-07-25 09:22:42.200301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:03000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.548 [2024-07-25 09:22:42.200313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.548 #41 NEW cov: 12246 ft: 15230 corp: 25/478b lim: 35 exec/s: 41 rss: 73Mb L: 16/34 MS: 1 ChangeBit- 00:07:29.548 [2024-07-25 09:22:42.280454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff0a0adf cdw11:7fff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.548 [2024-07-25 09:22:42.280481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.548 [2024-07-25 09:22:42.280509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0aff cdw11:46ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.548 [2024-07-25 09:22:42.280521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.548 #42 NEW cov: 12246 ft: 15279 corp: 26/493b lim: 35 exec/s: 42 rss: 73Mb L: 15/34 MS: 1 CrossOver- 00:07:29.548 [2024-07-25 09:22:42.340650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff000b0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.548 [2024-07-25 09:22:42.340679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.548 [2024-07-25 09:22:42.340707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.548 [2024-07-25 09:22:42.340720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.806 #43 NEW cov: 12246 ft: 15282 corp: 27/510b lim: 35 exec/s: 21 rss: 73Mb L: 17/34 MS: 1 ChangeBit- 00:07:29.806 #43 DONE cov: 12246 ft: 15282 corp: 27/510b lim: 35 exec/s: 21 rss: 73Mb 00:07:29.806 Done 43 runs in 2 second(s) 00:07:29.806 
09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.806 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.806 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.806 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:29.806 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:29.806 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.807 09:22:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:29.807 [2024-07-25 09:22:42.533139] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:29.807 [2024-07-25 09:22:42.533218] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417707 ] 00:07:29.807 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.065 [2024-07-25 09:22:42.695018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.065 [2024-07-25 09:22:42.758965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.065 [2024-07-25 09:22:42.817201] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.065 [2024-07-25 09:22:42.833429] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:30.065 INFO: Running with entropic power schedule (0xFF, 100). 
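Fuzzer type 5, launched above against port 4405, targets Create I/O Submission Queue (admin opcode 01h), as the entry point named in the records below confirms. CDW10 keeps the same QSIZE/QID split as the CQ case; CDW11 differs, per the NVMe base spec, carrying the paired completion queue ID in [31:16], the queue priority in bits 2:1, and PC in bit 0. Same caveats as the CQ decode earlier; only CDW11 changes.

#include <stdint.h>
#include <stdio.h>

/* CDW11 split of Create I/O Submission Queue (opcode 01h),
 * per the NVMe base spec; reader's aid only. */
static void decode_create_sq_cdw11(uint32_t cdw11)
{
    printf("cqid=0x%04x qprio=%u pc=%u\n",
           cdw11 >> 16,        /* paired completion queue ID */
           (cdw11 >> 1) & 3u,  /* queue priority             */
           cdw11 & 1u);        /* physically contiguous      */
}

int main(void)
{
    decode_create_sq_cdw11(0x00000000); /* cdw11 from the first records below */
    return 0;
}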
00:07:30.065 INFO: Seed: 2486088661 00:07:30.065 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:30.065 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:30.065 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:30.065 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.065 #2 INITED exec/s: 0 rss: 63Mb 00:07:30.065 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.065 This may also happen if the target rejected all inputs we tried so far 00:07:30.323 [2024-07-25 09:22:42.878964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.323 [2024-07-25 09:22:42.878989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.323 [2024-07-25 09:22:42.879041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.323 [2024-07-25 09:22:42.879052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.323 NEW_FUNC[1/701]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:30.323 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.323 #13 NEW cov: 12030 ft: 12026 corp: 2/27b lim: 45 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:30.323 [2024-07-25 09:22:43.029439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.323 [2024-07-25 09:22:43.029468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.323 [2024-07-25 09:22:43.029517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.323 [2024-07-25 09:22:43.029528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.323 [2024-07-25 09:22:43.029575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.323 [2024-07-25 09:22:43.029586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.323 #14 NEW cov: 12143 ft: 12802 corp: 3/61b lim: 45 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 CopyPart- 00:07:30.323 [2024-07-25 09:22:43.089207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.323 [2024-07-25 09:22:43.089233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.323 #20 NEW cov: 12149 ft: 13782 corp: 4/77b lim: 45 exec/s: 0 rss: 70Mb L: 16/34 MS: 1 EraseBytes- 00:07:30.323 
[2024-07-25 09:22:43.129511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.323 [2024-07-25 09:22:43.129535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.323 [2024-07-25 09:22:43.129587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.324 [2024-07-25 09:22:43.129599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 #27 NEW cov: 12234 ft: 14147 corp: 5/99b lim: 45 exec/s: 0 rss: 70Mb L: 22/34 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:30.582 [2024-07-25 09:22:43.169632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.169656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-07-25 09:22:43.169704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.169716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 #28 NEW cov: 12234 ft: 14220 corp: 6/119b lim: 45 exec/s: 0 rss: 70Mb L: 20/34 MS: 1 EraseBytes- 00:07:30.582 [2024-07-25 09:22:43.209524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.209546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 #29 NEW cov: 12234 ft: 14289 corp: 7/135b lim: 45 exec/s: 0 rss: 71Mb L: 16/34 MS: 1 ChangeByte- 00:07:30.582 [2024-07-25 09:22:43.259752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.259776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 #30 NEW cov: 12234 ft: 14369 corp: 8/151b lim: 45 exec/s: 0 rss: 71Mb L: 16/34 MS: 1 CopyPart- 00:07:30.582 [2024-07-25 09:22:43.299822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.299845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 #31 NEW cov: 12234 ft: 14462 corp: 9/168b lim: 45 exec/s: 0 rss: 71Mb L: 17/34 MS: 1 InsertByte- 00:07:30.582 [2024-07-25 09:22:43.350468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.350493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-07-25 09:22:43.350544] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0000ab00 cdw11:00e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.350556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 [2024-07-25 09:22:43.350605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.350615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.582 [2024-07-25 09:22:43.350662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-07-25 09:22:43.350673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.582 #32 NEW cov: 12234 ft: 14852 corp: 10/211b lim: 45 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:07:30.839 [2024-07-25 09:22:43.400082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.400105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.839 #33 NEW cov: 12234 ft: 14884 corp: 11/227b lim: 45 exec/s: 0 rss: 71Mb L: 16/43 MS: 1 ShuffleBytes- 00:07:30.839 [2024-07-25 09:22:43.440328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.440350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.440399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.440411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.839 #34 NEW cov: 12234 ft: 14899 corp: 12/252b lim: 45 exec/s: 0 rss: 71Mb L: 25/43 MS: 1 EraseBytes- 00:07:30.839 [2024-07-25 09:22:43.490805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.490828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.490876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0000ab00 cdw11:00e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.490887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.490934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.490947] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.490993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.491004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.839 #35 NEW cov: 12234 ft: 14936 corp: 13/295b lim: 45 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 CrossOver- 00:07:30.839 [2024-07-25 09:22:43.540849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.540871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.540916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.540927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.540975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.540986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.839 #36 NEW cov: 12234 ft: 14952 corp: 14/323b lim: 45 exec/s: 0 rss: 72Mb L: 28/43 MS: 1 EraseBytes- 00:07:30.839 [2024-07-25 09:22:43.580747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.580769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.580816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.580827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.839 #37 NEW cov: 12234 ft: 14981 corp: 15/348b lim: 45 exec/s: 0 rss: 72Mb L: 25/43 MS: 1 CopyPart- 00:07:30.839 [2024-07-25 09:22:43.621224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.621247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.621296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0000ab00 cdw11:00e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.621307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.621354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 
cdw11:e9e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.621365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.839 [2024-07-25 09:22:43.621412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.839 [2024-07-25 09:22:43.621422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.097 #38 NEW cov: 12234 ft: 14994 corp: 16/391b lim: 45 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 CMP- DE: "\000\027\254Q\005\300\240\230"- 00:07:31.097 [2024-07-25 09:22:43.671186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.671209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.097 [2024-07-25 09:22:43.671259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.671269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.097 [2024-07-25 09:22:43.671317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.671328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.097 #39 NEW cov: 12234 ft: 15017 corp: 17/421b lim: 45 exec/s: 0 rss: 72Mb L: 30/43 MS: 1 InsertRepeatedBytes- 00:07:31.097 [2024-07-25 09:22:43.711235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.711257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.097 [2024-07-25 09:22:43.711308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.711318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.097 [2024-07-25 09:22:43.711367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.711378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.097 #40 NEW cov: 12234 ft: 15070 corp: 18/448b lim: 45 exec/s: 0 rss: 72Mb L: 27/43 MS: 1 CopyPart- 00:07:31.097 [2024-07-25 09:22:43.761552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.761574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:31.097 [2024-07-25 09:22:43.761624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.761635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.097 [2024-07-25 09:22:43.761684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.761695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.097 [2024-07-25 09:22:43.761743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.761753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.097 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:31.097 #46 NEW cov: 12257 ft: 15144 corp: 19/491b lim: 45 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 CopyPart- 00:07:31.097 [2024-07-25 09:22:43.801183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000007a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.801208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.097 #47 NEW cov: 12257 ft: 15145 corp: 20/507b lim: 45 exec/s: 0 rss: 72Mb L: 16/43 MS: 1 ChangeByte- 00:07:31.097 [2024-07-25 09:22:43.841315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.841338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.097 #48 NEW cov: 12257 ft: 15152 corp: 21/524b lim: 45 exec/s: 48 rss: 72Mb L: 17/43 MS: 1 EraseBytes- 00:07:31.097 [2024-07-25 09:22:43.881435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.097 [2024-07-25 09:22:43.881457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.097 #49 NEW cov: 12257 ft: 15160 corp: 22/538b lim: 45 exec/s: 49 rss: 72Mb L: 14/43 MS: 1 EraseBytes- 00:07:31.356 [2024-07-25 09:22:43.921835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:005c0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:43.921858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:43.921904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:17004dac cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:43.921915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.356 
[2024-07-25 09:22:43.921962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:43.921974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.356 #50 NEW cov: 12257 ft: 15195 corp: 23/566b lim: 45 exec/s: 50 rss: 72Mb L: 28/43 MS: 1 CMP- DE: "\\\366\321\016M\254\027\000"- 00:07:31.356 [2024-07-25 09:22:43.972006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:43.972027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:43.972078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:43.972089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:43.972137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5cf60000 cdw11:d10e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:43.972148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.356 #51 NEW cov: 12257 ft: 15208 corp: 24/599b lim: 45 exec/s: 51 rss: 72Mb L: 33/43 MS: 1 PersAutoDict- DE: "\\\366\321\016M\254\027\000"- 00:07:31.356 [2024-07-25 09:22:44.022133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:005c0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.022156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:44.022203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:17004dac cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.022214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:44.022263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.022275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.356 #52 NEW cov: 12257 ft: 15239 corp: 25/627b lim: 45 exec/s: 52 rss: 72Mb L: 28/43 MS: 1 CrossOver- 00:07:31.356 [2024-07-25 09:22:44.072132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.072155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:44.072205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:31.356 [2024-07-25 09:22:44.072216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.356 #53 NEW cov: 12257 ft: 15299 corp: 26/647b lim: 45 exec/s: 53 rss: 72Mb L: 20/43 MS: 1 CopyPart- 00:07:31.356 [2024-07-25 09:22:44.112104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.112126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.356 #54 NEW cov: 12257 ft: 15363 corp: 27/664b lim: 45 exec/s: 54 rss: 72Mb L: 17/43 MS: 1 InsertByte- 00:07:31.356 [2024-07-25 09:22:44.162747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.162770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:44.162817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d9d90000 cdw11:d9d90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.162828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:44.162877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.162890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.356 [2024-07-25 09:22:44.162937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.356 [2024-07-25 09:22:44.162948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.614 #55 NEW cov: 12257 ft: 15386 corp: 28/702b lim: 45 exec/s: 55 rss: 72Mb L: 38/43 MS: 1 InsertRepeatedBytes- 00:07:31.614 [2024-07-25 09:22:44.202508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2cff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.202530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.614 [2024-07-25 09:22:44.202579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.202589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.614 #56 NEW cov: 12257 ft: 15388 corp: 29/721b lim: 45 exec/s: 56 rss: 72Mb L: 19/43 MS: 1 CMP- DE: "\377\036"- 00:07:31.614 [2024-07-25 09:22:44.252832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.252854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.614 [2024-07-25 09:22:44.252902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.252913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.614 [2024-07-25 09:22:44.252960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5cf60000 cdw11:d10e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.252971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.614 #57 NEW cov: 12257 ft: 15422 corp: 30/754b lim: 45 exec/s: 57 rss: 72Mb L: 33/43 MS: 1 ChangeBit- 00:07:31.614 [2024-07-25 09:22:44.302636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.302659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.614 #58 NEW cov: 12257 ft: 15431 corp: 31/770b lim: 45 exec/s: 58 rss: 72Mb L: 16/43 MS: 1 ChangeBit- 00:07:31.614 [2024-07-25 09:22:44.343267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.343290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.614 [2024-07-25 09:22:44.343339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.343350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.614 [2024-07-25 09:22:44.343399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0000d900 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.343410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.614 [2024-07-25 09:22:44.343459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.343469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.614 #59 NEW cov: 12257 ft: 15441 corp: 32/813b lim: 45 exec/s: 59 rss: 72Mb L: 43/43 MS: 1 CrossOver- 00:07:31.614 [2024-07-25 09:22:44.382839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.614 [2024-07-25 09:22:44.382862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.614 #60 NEW cov: 12257 ft: 15459 corp: 33/830b lim: 45 exec/s: 60 rss: 72Mb L: 17/43 MS: 1 CrossOver- 00:07:31.872 [2024-07-25 09:22:44.433306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO 
SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.433328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.433376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.433387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.433437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5cf60000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.433448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.473449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.473472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.473521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.473532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.473578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5cf60000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.473589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.872 #62 NEW cov: 12257 ft: 15501 corp: 34/863b lim: 45 exec/s: 62 rss: 72Mb L: 33/43 MS: 2 CMP-ShuffleBytes- DE: "\377\377~\323\340\025\200\343"- 00:07:31.872 [2024-07-25 09:22:44.513396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.513419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.513472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.513484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.872 #63 NEW cov: 12257 ft: 15519 corp: 35/889b lim: 45 exec/s: 63 rss: 72Mb L: 26/43 MS: 1 InsertByte- 00:07:31.872 [2024-07-25 09:22:44.553823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.553846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.553899] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.553909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.553957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.553969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.554014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.554025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.872 #64 NEW cov: 12257 ft: 15528 corp: 36/933b lim: 45 exec/s: 64 rss: 72Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:31.872 [2024-07-25 09:22:44.603610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:e9ffff1e cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.603632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.603685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.603696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.872 #67 NEW cov: 12257 ft: 15532 corp: 37/953b lim: 45 exec/s: 67 rss: 72Mb L: 20/44 MS: 3 CrossOver-PersAutoDict-InsertRepeatedBytes- DE: "\377\036"- 00:07:31.872 [2024-07-25 09:22:44.643763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.643787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.872 [2024-07-25 09:22:44.643837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.872 [2024-07-25 09:22:44.643848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.872 #68 NEW cov: 12257 ft: 15537 corp: 38/979b lim: 45 exec/s: 68 rss: 72Mb L: 26/44 MS: 1 ChangeByte- 00:07:32.131 [2024-07-25 09:22:44.683949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.683974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.131 [2024-07-25 09:22:44.684026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.684037] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.131 #69 NEW cov: 12257 ft: 15549 corp: 39/999b lim: 45 exec/s: 69 rss: 73Mb L: 20/44 MS: 1 CopyPart- 00:07:32.131 [2024-07-25 09:22:44.734185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.734207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.131 [2024-07-25 09:22:44.734258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.734269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.131 [2024-07-25 09:22:44.734318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ff7e0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.734330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.131 #70 NEW cov: 12257 ft: 15556 corp: 40/1032b lim: 45 exec/s: 70 rss: 73Mb L: 33/44 MS: 1 PersAutoDict- DE: "\377\377~\323\340\025\200\343"- 00:07:32.131 [2024-07-25 09:22:44.783955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000007a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.783978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.131 #71 NEW cov: 12257 ft: 15559 corp: 41/1048b lim: 45 exec/s: 71 rss: 73Mb L: 16/44 MS: 1 ShuffleBytes- 00:07:32.131 [2024-07-25 09:22:44.834624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.834647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.131 [2024-07-25 09:22:44.834700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d9d90000 cdw11:d9d90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.834710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.131 [2024-07-25 09:22:44.834757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.834767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.131 [2024-07-25 09:22:44.834815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.131 [2024-07-25 09:22:44.834824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.131 #72 NEW cov: 12257 ft: 15560 corp: 42/1086b lim: 45 exec/s: 36 rss: 73Mb L: 38/44 
MS: 1 ChangeBit- 00:07:32.131 #72 DONE cov: 12257 ft: 15560 corp: 42/1086b lim: 45 exec/s: 36 rss: 73Mb 00:07:32.131 ###### Recommended dictionary. ###### 00:07:32.131 "\000\027\254Q\005\300\240\230" # Uses: 0 00:07:32.131 "\\\366\321\016M\254\027\000" # Uses: 1 00:07:32.131 "\377\036" # Uses: 1 00:07:32.131 "\377\377~\323\340\025\200\343" # Uses: 1 00:07:32.131 ###### End of recommended dictionary. ###### 00:07:32.131 Done 72 runs in 2 second(s) 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.390 09:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:32.390 [2024-07-25 09:22:45.020224] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:32.390 [2024-07-25 09:22:45.020283] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418136 ] 00:07:32.390 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.390 [2024-07-25 09:22:45.183465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.647 [2024-07-25 09:22:45.248676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.647 [2024-07-25 09:22:45.307092] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.647 [2024-07-25 09:22:45.323318] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:32.647 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.647 INFO: Seed: 679134686 00:07:32.647 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:32.647 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:32.647 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:32.647 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.647 #2 INITED exec/s: 0 rss: 63Mb 00:07:32.647 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.647 This may also happen if the target rejected all inputs we tried so far 00:07:32.647 [2024-07-25 09:22:45.372497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.647 [2024-07-25 09:22:45.372523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.906 NEW_FUNC[1/699]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:32.906 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:32.906 #3 NEW cov: 11947 ft: 11945 corp: 2/3b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:07:32.906 [2024-07-25 09:22:45.513249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe03 cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.513282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.906 [2024-07-25 09:22:45.513334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.513346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.906 [2024-07-25 09:22:45.513395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.513407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.906 [2024-07-25 09:22:45.513456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.513466] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.906 #7 NEW cov: 12060 ft: 12666 corp: 3/12b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 4 ChangeByte-CopyPart-CopyPart-CMP- DE: "\376\003\000\000\000\000\000\000"- 00:07:32.906 [2024-07-25 09:22:45.552914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.552938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.906 #8 NEW cov: 12066 ft: 13015 corp: 4/14b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 CopyPart- 00:07:32.906 [2024-07-25 09:22:45.593009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000eef5 cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.593031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.906 #9 NEW cov: 12151 ft: 13304 corp: 5/16b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 ChangeBinInt- 00:07:32.906 [2024-07-25 09:22:45.643512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.643535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.906 [2024-07-25 09:22:45.643584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.643595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.906 [2024-07-25 09:22:45.643645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.643656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.906 [2024-07-25 09:22:45.643706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.906 [2024-07-25 09:22:45.643715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.906 #10 NEW cov: 12151 ft: 13366 corp: 6/25b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 PersAutoDict- DE: "\376\003\000\000\000\000\000\000"- 00:07:32.907 [2024-07-25 09:22:45.683504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:32.907 [2024-07-25 09:22:45.683526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.907 [2024-07-25 09:22:45.683576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.907 [2024-07-25 09:22:45.683587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.907 [2024-07-25 09:22:45.683634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:07:32.907 [2024-07-25 09:22:45.683645] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.907 #11 NEW cov: 12151 ft: 13555 corp: 7/31b lim: 10 exec/s: 0 rss: 72Mb L: 6/9 MS: 1 InsertRepeatedBytes- 00:07:33.166 [2024-07-25 09:22:45.723767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.723790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.723840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.723851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.723901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.723912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.723961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.723971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.166 #12 NEW cov: 12151 ft: 13678 corp: 8/40b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 PersAutoDict- DE: "\376\003\000\000\000\000\000\000"- 00:07:33.166 [2024-07-25 09:22:45.763756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.763778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.763829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.763840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.763890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.763901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.166 #13 NEW cov: 12151 ft: 13707 corp: 9/46b lim: 10 exec/s: 0 rss: 72Mb L: 6/9 MS: 1 ChangeBinInt- 00:07:33.166 [2024-07-25 09:22:45.814126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.814149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.814198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.814209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.814259] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.814271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.814316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.814327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.814377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.814388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.166 #14 NEW cov: 12151 ft: 13809 corp: 10/56b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 PersAutoDict- DE: "\376\003\000\000\000\000\000\000"- 00:07:33.166 [2024-07-25 09:22:45.853720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.853744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.166 #15 NEW cov: 12151 ft: 13893 corp: 11/58b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 CopyPart- 00:07:33.166 [2024-07-25 09:22:45.894076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.894099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.894148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.894159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.894208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.894219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.166 #16 NEW cov: 12151 ft: 13928 corp: 12/64b lim: 10 exec/s: 0 rss: 72Mb L: 6/10 MS: 1 ShuffleBytes- 00:07:33.166 [2024-07-25 09:22:45.934409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.934433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.934486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.934498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.934547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.934557] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.934606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.934617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.166 [2024-07-25 09:22:45.934666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.166 [2024-07-25 09:22:45.934676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.166 #17 NEW cov: 12151 ft: 13975 corp: 13/74b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:07:33.425 [2024-07-25 09:22:45.984375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000eef5 cdw11:00000000 00:07:33.425 [2024-07-25 09:22:45.984399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:45.984450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:45.984462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:45.984510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:45.984520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.426 #18 NEW cov: 12151 ft: 14004 corp: 14/80b lim: 10 exec/s: 0 rss: 72Mb L: 6/10 MS: 1 CMP- DE: "\000\000\000\002"- 00:07:33.426 [2024-07-25 09:22:46.034719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.034743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.034793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.034804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.034856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.034867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.034916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.034926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.034975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.034985] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.426 #19 NEW cov: 12151 ft: 14030 corp: 15/90b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:33.426 [2024-07-25 09:22:46.084518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000eef5 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.084544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.084593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.084605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.426 #20 NEW cov: 12151 ft: 14177 corp: 16/95b lim: 10 exec/s: 0 rss: 72Mb L: 5/10 MS: 1 EraseBytes- 00:07:33.426 [2024-07-25 09:22:46.135035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.135058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.135112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.135124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.135173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.135185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.135235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.135247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.135297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.135307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.426 #21 NEW cov: 12151 ft: 14198 corp: 17/105b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:07:33.426 [2024-07-25 09:22:46.174833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001b00 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.174856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.174906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000eef5 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.174916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.426 #22 NEW cov: 12151 ft: 14211 corp: 18/109b lim: 10 exec/s: 0 rss: 72Mb L: 4/10 MS: 1 CMP- 
DE: "\033\000"- 00:07:33.426 [2024-07-25 09:22:46.215045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.215072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.215122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.215134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.426 [2024-07-25 09:22:46.215184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:07:33.426 [2024-07-25 09:22:46.215194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 #23 NEW cov: 12151 ft: 14243 corp: 19/116b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 EraseBytes- 00:07:33.685 [2024-07-25 09:22:46.254896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.254920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:33.685 #24 NEW cov: 12174 ft: 14288 corp: 20/118b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:33.685 [2024-07-25 09:22:46.295397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.295421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.295471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.295481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.295531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.295542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.295589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001000 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.295600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.685 #25 NEW cov: 12174 ft: 14333 corp: 21/127b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 ChangeBit- 00:07:33.685 [2024-07-25 09:22:46.345441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.345464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.345514] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.345525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.345575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.345586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 #26 NEW cov: 12174 ft: 14382 corp: 22/133b lim: 10 exec/s: 26 rss: 72Mb L: 6/10 MS: 1 ChangeBinInt- 00:07:33.685 [2024-07-25 09:22:46.395819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.395841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.395891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000fe cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.395903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.395953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.395964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.396031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.396041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.396100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.396111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.685 #27 NEW cov: 12174 ft: 14412 corp: 23/143b lim: 10 exec/s: 27 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:33.685 [2024-07-25 09:22:46.435457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a5d cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.435480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 #28 NEW cov: 12174 ft: 14416 corp: 24/146b lim: 10 exec/s: 28 rss: 72Mb L: 3/10 MS: 1 InsertByte- 00:07:33.685 [2024-07-25 09:22:46.476032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.476055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.476109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fe03 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.476120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.476171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.476183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.476234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.476244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.685 [2024-07-25 09:22:46.476294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.685 [2024-07-25 09:22:46.476306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.944 #29 NEW cov: 12174 ft: 14435 corp: 25/156b lim: 10 exec/s: 29 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:07:33.944 [2024-07-25 09:22:46.526198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.526221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.944 [2024-07-25 09:22:46.526270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.526281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.944 [2024-07-25 09:22:46.526331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.526342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.944 [2024-07-25 09:22:46.526392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.526402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.944 [2024-07-25 09:22:46.526449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.526461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.944 #30 NEW cov: 12174 ft: 14452 corp: 26/166b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:07:33.944 [2024-07-25 09:22:46.565836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.565859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.944 #31 NEW cov: 12174 ft: 14465 corp: 27/168b lim: 10 exec/s: 31 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:07:33.944 [2024-07-25 09:22:46.605923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 
00:07:33.944 [2024-07-25 09:22:46.605946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.944 #32 NEW cov: 12174 ft: 14535 corp: 28/170b lim: 10 exec/s: 32 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:07:33.944 [2024-07-25 09:22:46.656356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000eef5 cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.656378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.944 [2024-07-25 09:22:46.656427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002f cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.656438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.944 [2024-07-25 09:22:46.656489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.656501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.944 #33 NEW cov: 12174 ft: 14539 corp: 29/176b lim: 10 exec/s: 33 rss: 73Mb L: 6/10 MS: 1 ChangeByte- 00:07:33.944 [2024-07-25 09:22:46.696225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a9f cdw11:00000000 00:07:33.944 [2024-07-25 09:22:46.696248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.944 #34 NEW cov: 12174 ft: 14547 corp: 30/178b lim: 10 exec/s: 34 rss: 73Mb L: 2/10 MS: 1 InsertByte- 00:07:33.944 [2024-07-25 09:22:46.736435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001b00 cdw11:00000000 00:07:33.945 [2024-07-25 09:22:46.736458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.945 [2024-07-25 09:22:46.736508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000eeff cdw11:00000000 00:07:33.945 [2024-07-25 09:22:46.736519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.203 #35 NEW cov: 12174 ft: 14567 corp: 31/182b lim: 10 exec/s: 35 rss: 73Mb L: 4/10 MS: 1 ChangeBinInt- 00:07:34.203 [2024-07-25 09:22:46.786493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a5d cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.786516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.203 #36 NEW cov: 12174 ft: 14585 corp: 32/185b lim: 10 exec/s: 36 rss: 73Mb L: 3/10 MS: 1 ChangeBit- 00:07:34.203 [2024-07-25 09:22:46.836992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.837015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.203 [2024-07-25 09:22:46.837066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000031f cdw11:00000000 
00:07:34.203 [2024-07-25 09:22:46.837083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.203 [2024-07-25 09:22:46.837134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.837148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.203 [2024-07-25 09:22:46.837198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.837208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.203 #37 NEW cov: 12174 ft: 14592 corp: 33/194b lim: 10 exec/s: 37 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:07:34.203 [2024-07-25 09:22:46.876717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e6f5 cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.876739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.203 #38 NEW cov: 12174 ft: 14614 corp: 34/196b lim: 10 exec/s: 38 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:07:34.203 [2024-07-25 09:22:46.916921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.916944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.203 [2024-07-25 09:22:46.916996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000031f cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.917008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.203 #39 NEW cov: 12174 ft: 14671 corp: 35/201b lim: 10 exec/s: 39 rss: 73Mb L: 5/10 MS: 1 EraseBytes- 00:07:34.203 [2024-07-25 09:22:46.967511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.967533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.203 [2024-07-25 09:22:46.967585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.967596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.203 [2024-07-25 09:22:46.967644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.967655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.203 [2024-07-25 09:22:46.967706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000300 cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.967716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.203 [2024-07-25 09:22:46.967765] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.203 [2024-07-25 09:22:46.967776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.203 #40 NEW cov: 12174 ft: 14680 corp: 36/211b lim: 10 exec/s: 40 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:34.462 [2024-07-25 09:22:47.017500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.017523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.017573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.017584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.017635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.017649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.017697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000400 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.017707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.462 #41 NEW cov: 12174 ft: 14715 corp: 37/220b lim: 10 exec/s: 41 rss: 74Mb L: 9/10 MS: 1 ChangeBit- 00:07:34.462 [2024-07-25 09:22:47.057742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.057765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.057816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.057827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.057877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.057888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.057936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.057946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.057997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000fe cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.058008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.462 #42 NEW cov: 12174 ft: 14769 corp: 38/230b 
lim: 10 exec/s: 42 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:07:34.462 [2024-07-25 09:22:47.097686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.097710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.097761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.097773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.097823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000010 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.097834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.097882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.097893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.462 #43 NEW cov: 12174 ft: 14787 corp: 39/239b lim: 10 exec/s: 43 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:34.462 [2024-07-25 09:22:47.147967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.147990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.148041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fe00 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.148052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.148107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.148119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.148167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.148177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.148225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.148237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.462 #44 NEW cov: 12174 ft: 14790 corp: 40/249b lim: 10 exec/s: 44 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:34.462 [2024-07-25 09:22:47.197940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.197964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.198016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d8d8 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.198027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.462 [2024-07-25 09:22:47.198079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000d8d8 cdw11:00000000 00:07:34.462 [2024-07-25 09:22:47.198090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.462 #45 NEW cov: 12174 ft: 14801 corp: 41/255b lim: 10 exec/s: 45 rss: 74Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:07:34.462 [2024-07-25 09:22:47.248147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008fe cdw11:00000000 00:07:34.463 [2024-07-25 09:22:47.248172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.463 [2024-07-25 09:22:47.248221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000300 cdw11:00000000 00:07:34.463 [2024-07-25 09:22:47.248232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.463 [2024-07-25 09:22:47.248283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000010 cdw11:00000000 00:07:34.463 [2024-07-25 09:22:47.248294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.463 [2024-07-25 09:22:47.248345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.463 [2024-07-25 09:22:47.248355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.722 #46 NEW cov: 12174 ft: 14810 corp: 42/264b lim: 10 exec/s: 46 rss: 74Mb L: 9/10 MS: 1 ChangeBit- 00:07:34.722 [2024-07-25 09:22:47.298062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afe cdw11:00000000 00:07:34.722 [2024-07-25 09:22:47.298092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.722 [2024-07-25 09:22:47.298144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.722 [2024-07-25 09:22:47.298155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.722 #47 NEW cov: 12174 ft: 14818 corp: 43/269b lim: 10 exec/s: 47 rss: 74Mb L: 5/10 MS: 1 EraseBytes- 00:07:34.722 [2024-07-25 09:22:47.338465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a30 cdw11:00000000 00:07:34.722 [2024-07-25 09:22:47.338488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.722 [2024-07-25 09:22:47.338539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fe03 
cdw11:00000000 00:07:34.722 [2024-07-25 09:22:47.338551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.722 [2024-07-25 09:22:47.338601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.722 [2024-07-25 09:22:47.338612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.722 [2024-07-25 09:22:47.338661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000010 cdw11:00000000 00:07:34.722 [2024-07-25 09:22:47.338672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.722 [2024-07-25 09:22:47.338724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.722 [2024-07-25 09:22:47.338734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.722 #48 NEW cov: 12174 ft: 14820 corp: 44/279b lim: 10 exec/s: 24 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:07:34.722 #48 DONE cov: 12174 ft: 14820 corp: 44/279b lim: 10 exec/s: 24 rss: 74Mb 00:07:34.722 ###### Recommended dictionary. ###### 00:07:34.722 "\376\003\000\000\000\000\000\000" # Uses: 3 00:07:34.722 "\000\000\000\002" # Uses: 0 00:07:34.722 "\033\000" # Uses: 0 00:07:34.722 ###### End of recommended dictionary. ###### 00:07:34.722 Done 48 runs in 2 second(s) 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 
00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.722 09:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:34.722 [2024-07-25 09:22:47.508749] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:34.722 [2024-07-25 09:22:47.508814] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418566 ] 00:07:34.981 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.981 [2024-07-25 09:22:47.675893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.981 [2024-07-25 09:22:47.739958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.240 [2024-07-25 09:22:47.798266] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.240 [2024-07-25 09:22:47.814518] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:35.240 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.240 INFO: Seed: 3171122435 00:07:35.240 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:35.240 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:35.240 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:35.240 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.240 #2 INITED exec/s: 0 rss: 63Mb 00:07:35.240 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:35.240 This may also happen if the target rejected all inputs we tried so far 00:07:35.240 [2024-07-25 09:22:47.860260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:35.240 [2024-07-25 09:22:47.860286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.240 [2024-07-25 09:22:47.860335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:35.240 [2024-07-25 09:22:47.860346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.240 [2024-07-25 09:22:47.860396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004eac cdw11:00000000 00:07:35.240 [2024-07-25 09:22:47.860407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.240 [2024-07-25 09:22:47.860458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001700 cdw11:00000000 00:07:35.240 [2024-07-25 09:22:47.860469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.240 NEW_FUNC[1/699]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:35.240 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:35.240 #3 NEW cov: 11947 ft: 11943 corp: 2/10b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CMP- DE: "'\216\342\217N\254\027\000"- 00:07:35.240 [2024-07-25 09:22:48.010634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:35.240 [2024-07-25 09:22:48.010673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.240 [2024-07-25 09:22:48.010733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:35.240 [2024-07-25 09:22:48.010750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.240 [2024-07-25 09:22:48.010807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e00 cdw11:00000000 00:07:35.240 [2024-07-25 09:22:48.010826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.240 [2024-07-25 09:22:48.010883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.240 [2024-07-25 09:22:48.010899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.499 #4 NEW cov: 12060 ft: 12420 corp: 3/19b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\001"- 00:07:35.499 [2024-07-25 09:22:48.070482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:35.499 [2024-07-25 
09:22:48.070507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.070555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.070566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.070612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e00 cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.070623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.499 #5 NEW cov: 12066 ft: 12952 corp: 4/26b lim: 10 exec/s: 0 rss: 72Mb L: 7/9 MS: 1 EraseBytes- 00:07:35.499 [2024-07-25 09:22:48.110585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e27 cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.110609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.110657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.110668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.110714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e00 cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.110725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.499 #6 NEW cov: 12151 ft: 13232 corp: 5/33b lim: 10 exec/s: 0 rss: 72Mb L: 7/9 MS: 1 ShuffleBytes- 00:07:35.499 [2024-07-25 09:22:48.160804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.160828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.160875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.160886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.160934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.160945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.160991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ee0a cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.161001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.499 #7 NEW cov: 12151 ft: 13347 corp: 6/41b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:07:35.499 [2024-07-25 09:22:48.200830] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.200856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.200902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.200912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.200958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004eac cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.200969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.499 #8 NEW cov: 12151 ft: 13423 corp: 7/48b lim: 10 exec/s: 0 rss: 72Mb L: 7/9 MS: 1 EraseBytes- 00:07:35.499 [2024-07-25 09:22:48.240738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003126 cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.240760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.499 #10 NEW cov: 12151 ft: 13714 corp: 8/50b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 2 ChangeByte-InsertByte- 00:07:35.499 [2024-07-25 09:22:48.281270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.281293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.499 [2024-07-25 09:22:48.281340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.499 [2024-07-25 09:22:48.281351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.500 [2024-07-25 09:22:48.281398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.500 [2024-07-25 09:22:48.281409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.500 [2024-07-25 09:22:48.281456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.500 [2024-07-25 09:22:48.281466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.500 [2024-07-25 09:22:48.281512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ee0a cdw11:00000000 00:07:35.500 [2024-07-25 09:22:48.281523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.758 #11 NEW cov: 12151 ft: 13819 corp: 9/60b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:35.758 [2024-07-25 09:22:48.331075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.758 [2024-07-25 09:22:48.331098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.758 [2024-07-25 09:22:48.331147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ee0a cdw11:00000000 00:07:35.758 [2024-07-25 09:22:48.331158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.758 #12 NEW cov: 12151 ft: 14027 corp: 10/64b lim: 10 exec/s: 0 rss: 72Mb L: 4/10 MS: 1 EraseBytes- 00:07:35.758 [2024-07-25 09:22:48.371441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e27 cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.371465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.371515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.371529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.371576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e00 cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.371588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.371633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a51 cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.371644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.759 #13 NEW cov: 12151 ft: 14092 corp: 11/72b lim: 10 exec/s: 0 rss: 72Mb L: 8/10 MS: 1 InsertByte- 00:07:35.759 [2024-07-25 09:22:48.421593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000078e cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.421616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.421665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.421676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.421724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004eac cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.421735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.421782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001700 cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.421792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.759 #14 NEW cov: 12151 ft: 14127 corp: 12/81b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 ChangeBit- 00:07:35.759 [2024-07-25 09:22:48.461689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eeee cdw11:00000000 
00:07:35.759 [2024-07-25 09:22:48.461712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.461761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.461772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.461820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.461831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.461877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.461887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.759 #15 NEW cov: 12151 ft: 14161 corp: 13/90b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 CopyPart- 00:07:35.759 [2024-07-25 09:22:48.501842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000318e cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.501867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.501916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000027e2 cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.501930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.501977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008f4e cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.501988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.502034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.502045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.759 #16 NEW cov: 12151 ft: 14180 corp: 14/98b lim: 10 exec/s: 0 rss: 72Mb L: 8/10 MS: 1 CrossOver- 00:07:35.759 [2024-07-25 09:22:48.541945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e27 cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.541969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.542018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e285 cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.542030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.542079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e00 cdw11:00000000 
00:07:35.759 [2024-07-25 09:22:48.542090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.759 [2024-07-25 09:22:48.542136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a51 cdw11:00000000 00:07:35.759 [2024-07-25 09:22:48.542147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.018 #17 NEW cov: 12151 ft: 14233 corp: 15/106b lim: 10 exec/s: 0 rss: 72Mb L: 8/10 MS: 1 ChangeBinInt- 00:07:36.018 [2024-07-25 09:22:48.592183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008f4e cdw11:00000000 00:07:36.018 [2024-07-25 09:22:48.592207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.018 [2024-07-25 09:22:48.592253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ac27 cdw11:00000000 00:07:36.018 [2024-07-25 09:22:48.592264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.018 [2024-07-25 09:22:48.592311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008ee2 cdw11:00000000 00:07:36.018 [2024-07-25 09:22:48.592322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.018 [2024-07-25 09:22:48.592386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008f4e cdw11:00000000 00:07:36.018 [2024-07-25 09:22:48.592398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.018 [2024-07-25 09:22:48.592444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ac17 cdw11:00000000 00:07:36.018 [2024-07-25 09:22:48.592454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.018 #18 NEW cov: 12151 ft: 14277 corp: 16/116b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:36.018 [2024-07-25 09:22:48.642244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000078e cdw11:00000000 00:07:36.018 [2024-07-25 09:22:48.642267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.018 [2024-07-25 09:22:48.642320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:36.018 [2024-07-25 09:22:48.642330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.018 [2024-07-25 09:22:48.642377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000046ac cdw11:00000000 00:07:36.018 [2024-07-25 09:22:48.642388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.018 [2024-07-25 09:22:48.642434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001700 cdw11:00000000 
00:07:36.019 [2024-07-25 09:22:48.642445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.019 #19 NEW cov: 12151 ft: 14305 corp: 17/125b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:36.019 [2024-07-25 09:22:48.692219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.692243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.019 [2024-07-25 09:22:48.692292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.692303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.019 [2024-07-25 09:22:48.692351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.692363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.019 #20 NEW cov: 12151 ft: 14338 corp: 18/132b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 CrossOver- 00:07:36.019 [2024-07-25 09:22:48.732464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002700 cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.732487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.019 [2024-07-25 09:22:48.732536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000008e cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.732547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.019 [2024-07-25 09:22:48.732595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000004e cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.732606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.019 [2024-07-25 09:22:48.732654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008fe2 cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.732664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.019 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:36.019 #21 NEW cov: 12174 ft: 14440 corp: 19/141b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:36.019 [2024-07-25 09:22:48.782612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003126 cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.782636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.019 [2024-07-25 09:22:48.782685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000027e2 cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.782696] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.019 [2024-07-25 09:22:48.782745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008f4e cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.782757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.019 [2024-07-25 09:22:48.782803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:07:36.019 [2024-07-25 09:22:48.782813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.019 #22 NEW cov: 12174 ft: 14463 corp: 20/149b lim: 10 exec/s: 0 rss: 72Mb L: 8/10 MS: 1 CrossOver- 00:07:36.278 [2024-07-25 09:22:48.832615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.832639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.832690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.832701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.832748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004eac cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.832760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.278 #23 NEW cov: 12174 ft: 14498 corp: 21/156b lim: 10 exec/s: 23 rss: 72Mb L: 7/10 MS: 1 CrossOver- 00:07:36.278 [2024-07-25 09:22:48.872857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e27 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.872881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.872930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006285 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.872940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.872989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e00 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.873000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.873047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a51 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.873058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.278 #24 NEW cov: 12174 ft: 14526 corp: 22/164b lim: 10 exec/s: 24 rss: 72Mb L: 8/10 MS: 1 ChangeBit- 00:07:36.278 [2024-07-25 09:22:48.923155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:0000ee00 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.923178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.923225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.923236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.923284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000aee cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.923296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.923342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.923355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.923401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ee0a cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.923412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.278 #25 NEW cov: 12174 ft: 14554 corp: 23/174b lim: 10 exec/s: 25 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:36.278 [2024-07-25 09:22:48.973002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008f4e cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.973025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.973078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e27 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.973089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:48.973136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e2ac cdw11:00000000 00:07:36.278 [2024-07-25 09:22:48.973147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.278 #26 NEW cov: 12174 ft: 14599 corp: 24/181b lim: 10 exec/s: 26 rss: 73Mb L: 7/10 MS: 1 CrossOver- 00:07:36.278 [2024-07-25 09:22:49.023290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:36.278 [2024-07-25 09:22:49.023313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:49.023360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:36.278 [2024-07-25 09:22:49.023370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:49.023418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 
cdw10:00004e00 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:49.023429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:49.023477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000410a cdw11:00000000 00:07:36.278 [2024-07-25 09:22:49.023487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.278 #27 NEW cov: 12174 ft: 14601 corp: 25/189b lim: 10 exec/s: 27 rss: 73Mb L: 8/10 MS: 1 InsertByte- 00:07:36.278 [2024-07-25 09:22:49.063426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009341 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:49.063450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:49.063498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c57c cdw11:00000000 00:07:36.278 [2024-07-25 09:22:49.063509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:49.063557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004fac cdw11:00000000 00:07:36.278 [2024-07-25 09:22:49.063568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.278 [2024-07-25 09:22:49.063617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001700 cdw11:00000000 00:07:36.278 [2024-07-25 09:22:49.063630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.537 #28 NEW cov: 12174 ft: 14635 corp: 26/197b lim: 10 exec/s: 28 rss: 73Mb L: 8/10 MS: 1 CMP- DE: "\223A\305|O\254\027\000"- 00:07:36.537 [2024-07-25 09:22:49.113432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004ee2 cdw11:00000000 00:07:36.537 [2024-07-25 09:22:49.113457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.537 [2024-07-25 09:22:49.113506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008f00 cdw11:00000000 00:07:36.537 [2024-07-25 09:22:49.113516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.537 [2024-07-25 09:22:49.113563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a8e cdw11:00000000 00:07:36.537 [2024-07-25 09:22:49.113574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.537 #29 NEW cov: 12174 ft: 14640 corp: 27/204b lim: 10 exec/s: 29 rss: 73Mb L: 7/10 MS: 1 ShuffleBytes- 00:07:36.537 [2024-07-25 09:22:49.153648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e27 cdw11:00000000 00:07:36.537 [2024-07-25 09:22:49.153672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:36.537 [2024-07-25 09:22:49.153721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e285 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.153731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.153780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002000 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.153791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.153836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a51 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.153847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.538 #30 NEW cov: 12174 ft: 14654 corp: 28/212b lim: 10 exec/s: 30 rss: 73Mb L: 8/10 MS: 1 CMP- DE: " \000"- 00:07:36.538 [2024-07-25 09:22:49.193755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002707 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.193778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.193826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008ee2 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.193837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.193886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008f46 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.193897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.193945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ac00 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.193956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.538 #31 NEW cov: 12174 ft: 14666 corp: 29/221b lim: 10 exec/s: 31 rss: 73Mb L: 9/10 MS: 1 CrossOver- 00:07:36.538 [2024-07-25 09:22:49.233495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.233521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.538 #32 NEW cov: 12174 ft: 14670 corp: 30/224b lim: 10 exec/s: 32 rss: 73Mb L: 3/10 MS: 1 EraseBytes- 00:07:36.538 [2024-07-25 09:22:49.284006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000318e cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.284029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.284078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008f27 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.284089] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.284137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e200 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.284149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.284193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004e0a cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.284203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.538 #33 NEW cov: 12174 ft: 14686 corp: 31/232b lim: 10 exec/s: 33 rss: 73Mb L: 8/10 MS: 1 ShuffleBytes- 00:07:36.538 [2024-07-25 09:22:49.323992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004ee2 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.324015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.324062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008f00 cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.324077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.538 [2024-07-25 09:22:49.324126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a8e cdw11:00000000 00:07:36.538 [2024-07-25 09:22:49.324137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.796 #34 NEW cov: 12174 ft: 14702 corp: 32/239b lim: 10 exec/s: 34 rss: 73Mb L: 7/10 MS: 1 CrossOver- 00:07:36.796 [2024-07-25 09:22:49.374417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002707 cdw11:00000000 00:07:36.796 [2024-07-25 09:22:49.374440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.796 [2024-07-25 09:22:49.374487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008ee2 cdw11:00000000 00:07:36.796 [2024-07-25 09:22:49.374498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.796 [2024-07-25 09:22:49.374546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008f46 cdw11:00000000 00:07:36.796 [2024-07-25 09:22:49.374557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.796 [2024-07-25 09:22:49.374604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ac00 cdw11:00000000 00:07:36.796 [2024-07-25 09:22:49.374614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.796 [2024-07-25 09:22:49.374661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000012f cdw11:00000000 00:07:36.796 [2024-07-25 09:22:49.374672] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.796 #35 NEW cov: 12174 ft: 14709 corp: 33/249b lim: 10 exec/s: 35 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:07:36.796 [2024-07-25 09:22:49.424171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009341 cdw11:00000000 00:07:36.796 [2024-07-25 09:22:49.424193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.796 [2024-07-25 09:22:49.424243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c500 cdw11:00000000 00:07:36.796 [2024-07-25 09:22:49.424254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.796 #36 NEW cov: 12174 ft: 14751 corp: 34/253b lim: 10 exec/s: 36 rss: 73Mb L: 4/10 MS: 1 EraseBytes- 00:07:36.796 [2024-07-25 09:22:49.474419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004ee2 cdw11:00000000 00:07:36.796 [2024-07-25 09:22:49.474443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.796 [2024-07-25 09:22:49.474492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008f00 cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.474504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.797 [2024-07-25 09:22:49.474553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a8e cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.474564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.797 #37 NEW cov: 12174 ft: 14766 corp: 35/259b lim: 10 exec/s: 37 rss: 73Mb L: 6/10 MS: 1 EraseBytes- 00:07:36.797 [2024-07-25 09:22:49.514681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003128 cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.514705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.797 [2024-07-25 09:22:49.514754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e27 cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.514764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.797 [2024-07-25 09:22:49.514814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.514825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.797 [2024-07-25 09:22:49.514873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004e00 cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.514883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.797 #38 NEW cov: 12174 ft: 14789 corp: 36/268b lim: 10 exec/s: 38 rss: 73Mb L: 9/10 MS: 1 
InsertByte- 00:07:36.797 [2024-07-25 09:22:49.554757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.554780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.797 [2024-07-25 09:22:49.554831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.554841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.797 [2024-07-25 09:22:49.554892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e26 cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.554903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.797 [2024-07-25 09:22:49.554956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000041 cdw11:00000000 00:07:36.797 [2024-07-25 09:22:49.554967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.797 #39 NEW cov: 12174 ft: 14803 corp: 37/277b lim: 10 exec/s: 39 rss: 74Mb L: 9/10 MS: 1 InsertByte- 00:07:37.055 [2024-07-25 09:22:49.605049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000278e cdw11:00000000 00:07:37.055 [2024-07-25 09:22:49.605076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.055 [2024-07-25 09:22:49.605126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e227 cdw11:00000000 00:07:37.055 [2024-07-25 09:22:49.605138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.055 [2024-07-25 09:22:49.605186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008f4e cdw11:00000000 00:07:37.055 [2024-07-25 09:22:49.605197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.055 [2024-07-25 09:22:49.605246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002600 cdw11:00000000 00:07:37.055 [2024-07-25 09:22:49.605256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.055 [2024-07-25 09:22:49.605307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000410a cdw11:00000000 00:07:37.055 [2024-07-25 09:22:49.605318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.055 #40 NEW cov: 12174 ft: 14830 corp: 38/287b lim: 10 exec/s: 40 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:07:37.055 [2024-07-25 09:22:49.655094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:37.055 [2024-07-25 09:22:49.655117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:37.055 [2024-07-25 09:22:49.655166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ee09 cdw11:00000000 00:07:37.055 [2024-07-25 09:22:49.655177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.055 [2024-07-25 09:22:49.655222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:37.055 [2024-07-25 09:22:49.655234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.055 [2024-07-25 09:22:49.655284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.655294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.056 #41 NEW cov: 12174 ft: 14845 corp: 39/296b lim: 10 exec/s: 41 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:37.056 [2024-07-25 09:22:49.704954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee04 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.704978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.705030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.705040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.056 #42 NEW cov: 12174 ft: 14856 corp: 40/300b lim: 10 exec/s: 42 rss: 74Mb L: 4/10 MS: 1 ChangeBinInt- 00:07:37.056 [2024-07-25 09:22:49.745330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003100 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.745353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.745401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a27 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.745411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.745459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e28f cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.745470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.745517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004e00 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.745527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.056 #43 NEW cov: 12174 ft: 14869 corp: 41/309b lim: 10 exec/s: 43 rss: 74Mb L: 9/10 MS: 1 CopyPart- 00:07:37.056 [2024-07-25 09:22:49.795629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ee04 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.795651] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.795698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009393 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.795708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.795755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00009393 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.795766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.795811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00009393 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.795822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.795866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.795877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.056 #44 NEW cov: 12174 ft: 14873 corp: 42/319b lim: 10 exec/s: 44 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:37.056 [2024-07-25 09:22:49.845578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e62 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.845601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.845650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006285 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.845662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.845709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e00 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.845721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.056 [2024-07-25 09:22:49.845769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a51 cdw11:00000000 00:07:37.056 [2024-07-25 09:22:49.845782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.314 #45 NEW cov: 12174 ft: 14879 corp: 43/327b lim: 10 exec/s: 22 rss: 74Mb L: 8/10 MS: 1 CopyPart- 00:07:37.314 #45 DONE cov: 12174 ft: 14879 corp: 43/327b lim: 10 exec/s: 22 rss: 74Mb 00:07:37.315 ###### Recommended dictionary. ###### 00:07:37.315 "'\216\342\217N\254\027\000" # Uses: 0 00:07:37.315 "\000\000\000\001" # Uses: 0 00:07:37.315 "\223A\305|O\254\027\000" # Uses: 0 00:07:37.315 " \000" # Uses: 0 00:07:37.315 ###### End of recommended dictionary. 
######
00:07:37.315 Done 45 runs in 2 second(s)
00:07:37.315 09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408
09:22:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
09:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408'
09:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
09:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
09:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
09:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8
[2024-07-25 09:22:50.031496] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:37.315 [2024-07-25 09:22:50.031575] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418889 ]
00:07:37.573 EAL: No free 2048 kB hugepages reported on node 1
00:07:37.573 [2024-07-25 09:22:50.202345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:37.573 [2024-07-25 09:22:50.266728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.573 [2024-07-25 09:22:50.324970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:37.573 [2024-07-25 09:22:50.341212] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 ***
00:07:37.573 INFO: Running with entropic power schedule (0xFF, 100).
00:07:37.573 INFO: Seed: 1402170492
00:07:37.573 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
00:07:37.573 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
00:07:37.573 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
00:07:37.573 INFO: A corpus is not provided, starting from an empty corpus
00:07:37.832 [2024-07-25 09:22:50.390587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:37.832 [2024-07-25 09:22:50.390613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:37.832 #2 INITED cov: 11975 ft: 11966 corp: 1/1b exec/s: 0 rss: 69Mb
00:07:37.832 [2024-07-25 09:22:50.430564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:37.832 [2024-07-25 09:22:50.430587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:37.832 #3 NEW cov: 12088 ft: 12543 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBit-
00:07:37.832 [2024-07-25 09:22:50.480718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:37.832 [2024-07-25 09:22:50.480742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:37.832 #4 NEW cov: 12094 ft: 12784 corp: 3/3b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 CrossOver-
00:07:37.832 [2024-07-25 09:22:50.530882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:37.832 [2024-07-25 09:22:50.530907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:37.832 #5 NEW cov: 12179 ft: 13064 corp: 4/4b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeByte-
00:07:37.832 [2024-07-25 09:22:50.571438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:37.832 [2024-07-25 09:22:50.571462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f
p:0 m:0 dnr:0 00:07:37.832 [2024-07-25 09:22:50.571516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.832 [2024-07-25 09:22:50.571528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.832 [2024-07-25 09:22:50.571581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.832 [2024-07-25 09:22:50.571592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.832 [2024-07-25 09:22:50.571645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.832 [2024-07-25 09:22:50.571655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.832 #6 NEW cov: 12179 ft: 13911 corp: 5/8b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:37.832 [2024-07-25 09:22:50.621627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.832 [2024-07-25 09:22:50.621651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.832 [2024-07-25 09:22:50.621703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.832 [2024-07-25 09:22:50.621714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.832 [2024-07-25 09:22:50.621767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.832 [2024-07-25 09:22:50.621778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.832 [2024-07-25 09:22:50.621845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.832 [2024-07-25 09:22:50.621857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.091 #7 NEW cov: 12179 ft: 13977 corp: 6/12b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:38.091 [2024-07-25 09:22:50.661818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.661842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.661896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.661907] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.661961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.661972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.662028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.662038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.091 #8 NEW cov: 12179 ft: 14044 corp: 7/16b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CrossOver- 00:07:38.091 [2024-07-25 09:22:50.711875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.711899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.711951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.711962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.712015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.712027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.712079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.712090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.091 #9 NEW cov: 12179 ft: 14080 corp: 8/20b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:38.091 [2024-07-25 09:22:50.761478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.761502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.091 #10 NEW cov: 12179 ft: 14129 corp: 9/21b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeBit- 00:07:38.091 [2024-07-25 09:22:50.802287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.802310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.802364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.802376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.802427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.802438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.802489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.802500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.802552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.802563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.091 #11 NEW cov: 12179 ft: 14241 corp: 10/26b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:07:38.091 [2024-07-25 09:22:50.852248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.852272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.852325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.852336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.852389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.852400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.091 [2024-07-25 09:22:50.852455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.852466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.091 #12 NEW cov: 12179 ft: 14258 corp: 11/30b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:38.091 [2024-07-25 09:22:50.891868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.091 [2024-07-25 09:22:50.891890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.350 #13 NEW cov: 12179 ft: 14313 
corp: 12/31b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeByte- 00:07:38.350 [2024-07-25 09:22:50.931991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:50.932018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.350 #14 NEW cov: 12179 ft: 14332 corp: 13/32b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:38.350 [2024-07-25 09:22:50.972091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:50.972115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.350 #15 NEW cov: 12179 ft: 14336 corp: 14/33b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:07:38.350 [2024-07-25 09:22:51.012752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.012775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.350 [2024-07-25 09:22:51.012830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.012841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.350 [2024-07-25 09:22:51.012901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.012912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.350 [2024-07-25 09:22:51.012966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.012976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.350 #16 NEW cov: 12179 ft: 14346 corp: 15/37b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:38.350 [2024-07-25 09:22:51.052509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.052533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.350 [2024-07-25 09:22:51.052588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.052600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.350 #17 NEW cov: 12179 ft: 14527 corp: 16/39b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:07:38.350 [2024-07-25 
09:22:51.102789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.102812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.350 [2024-07-25 09:22:51.102864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.102875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.350 [2024-07-25 09:22:51.102930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.102941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.350 #18 NEW cov: 12179 ft: 14714 corp: 17/42b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:07:38.350 [2024-07-25 09:22:51.142577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.350 [2024-07-25 09:22:51.142600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.609 #19 NEW cov: 12179 ft: 14749 corp: 18/43b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:07:38.609 [2024-07-25 09:22:51.182728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.182753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.609 #20 NEW cov: 12179 ft: 14766 corp: 19/44b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:38.609 [2024-07-25 09:22:51.223341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.223365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.609 [2024-07-25 09:22:51.223418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.223430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.609 [2024-07-25 09:22:51.223482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.223493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.609 [2024-07-25 09:22:51.223546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 
[2024-07-25 09:22:51.223557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.609 #21 NEW cov: 12179 ft: 14781 corp: 20/48b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ShuffleBytes- 00:07:38.609 [2024-07-25 09:22:51.263150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.263175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.609 [2024-07-25 09:22:51.263240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.263251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.609 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:38.609 #22 NEW cov: 12202 ft: 14868 corp: 21/50b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:07:38.609 [2024-07-25 09:22:51.394338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.394381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.609 [2024-07-25 09:22:51.394452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.394469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.609 [2024-07-25 09:22:51.394541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.394557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.609 [2024-07-25 09:22:51.394624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.394640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.609 [2024-07-25 09:22:51.394707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.609 [2024-07-25 09:22:51.394724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.868 #23 NEW cov: 12202 ft: 14903 corp: 22/55b lim: 5 exec/s: 23 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:07:38.868 [2024-07-25 09:22:51.434205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.434240] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.868 [2024-07-25 09:22:51.434300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.434311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.868 [2024-07-25 09:22:51.434369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.434380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.868 [2024-07-25 09:22:51.434436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.434446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.868 [2024-07-25 09:22:51.434500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.434510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.868 #24 NEW cov: 12202 ft: 14952 corp: 23/60b lim: 5 exec/s: 24 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:38.868 [2024-07-25 09:22:51.484307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.484331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.868 [2024-07-25 09:22:51.484388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.484400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.868 [2024-07-25 09:22:51.484455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.484466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.868 [2024-07-25 09:22:51.484523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.868 [2024-07-25 09:22:51.484535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.868 [2024-07-25 09:22:51.484591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.484603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.869 #25 NEW cov: 12202 ft: 14971 corp: 24/65b lim: 5 exec/s: 25 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:07:38.869 [2024-07-25 09:22:51.524424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.524449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.869 [2024-07-25 09:22:51.524506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.524518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.869 [2024-07-25 09:22:51.524573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.524584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.869 [2024-07-25 09:22:51.524639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.524651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.869 [2024-07-25 09:22:51.524703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.524715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.869 #26 NEW cov: 12202 ft: 14989 corp: 25/70b lim: 5 exec/s: 26 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:07:38.869 [2024-07-25 09:22:51.583903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.583927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.869 #27 NEW cov: 12202 ft: 15001 corp: 26/71b lim: 5 exec/s: 27 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:07:38.869 [2024-07-25 09:22:51.634236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.634260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.869 [2024-07-25 09:22:51.634315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.869 [2024-07-25 09:22:51.634327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.869 #28 NEW cov: 12202 ft: 15044 corp: 27/73b lim: 5 exec/s: 28 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:07:39.127 [2024-07-25 09:22:51.684713] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.684739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.684796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.684807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.684861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.684873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.684930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.684941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.128 #29 NEW cov: 12202 ft: 15065 corp: 28/77b lim: 5 exec/s: 29 rss: 72Mb L: 4/5 MS: 1 ChangeBit- 00:07:39.128 [2024-07-25 09:22:51.725005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.725028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.725086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.725098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.725154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.725166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.725222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.725233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.725289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.725301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.128 #30 NEW cov: 12202 ft: 15083 corp: 29/82b lim: 5 exec/s: 30 rss: 72Mb L: 5/5 MS: 1 
InsertRepeatedBytes- 00:07:39.128 [2024-07-25 09:22:51.764931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.764954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.765009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.765020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.765078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.765094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.765149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.765160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.128 #31 NEW cov: 12202 ft: 15108 corp: 30/86b lim: 5 exec/s: 31 rss: 72Mb L: 4/5 MS: 1 CopyPart- 00:07:39.128 [2024-07-25 09:22:51.804510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.804533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.128 #32 NEW cov: 12202 ft: 15175 corp: 31/87b lim: 5 exec/s: 32 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:07:39.128 [2024-07-25 09:22:51.844834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.844857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.844913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.844924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.128 #33 NEW cov: 12202 ft: 15213 corp: 32/89b lim: 5 exec/s: 33 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:07:39.128 [2024-07-25 09:22:51.895148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.895171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.895227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.895239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.128 [2024-07-25 09:22:51.895297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.128 [2024-07-25 09:22:51.895309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.128 #34 NEW cov: 12202 ft: 15222 corp: 33/92b lim: 5 exec/s: 34 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:07:39.387 [2024-07-25 09:22:51.945688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.945712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.387 [2024-07-25 09:22:51.945769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.945781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.387 [2024-07-25 09:22:51.945838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.945850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.387 [2024-07-25 09:22:51.945909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.945920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.387 [2024-07-25 09:22:51.945976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.945987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.387 #35 NEW cov: 12202 ft: 15227 corp: 34/97b lim: 5 exec/s: 35 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:07:39.387 [2024-07-25 09:22:51.985602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.985625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.387 [2024-07-25 09:22:51.985684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.985696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.387 [2024-07-25 09:22:51.985754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.985765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.387 [2024-07-25 09:22:51.985821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:51.985832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.387 #36 NEW cov: 12202 ft: 15235 corp: 35/101b lim: 5 exec/s: 36 rss: 72Mb L: 4/5 MS: 1 EraseBytes- 00:07:39.387 [2024-07-25 09:22:52.035174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.387 [2024-07-25 09:22:52.035197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.388 #37 NEW cov: 12202 ft: 15240 corp: 36/102b lim: 5 exec/s: 37 rss: 72Mb L: 1/5 MS: 1 CopyPart- 00:07:39.388 [2024-07-25 09:22:52.075355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.075378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.388 #38 NEW cov: 12202 ft: 15272 corp: 37/103b lim: 5 exec/s: 38 rss: 72Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:39.388 [2024-07-25 09:22:52.126133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.126157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.388 [2024-07-25 09:22:52.126214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.126226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.388 [2024-07-25 09:22:52.126282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.126296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.388 [2024-07-25 09:22:52.126353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.126365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.388 [2024-07-25 09:22:52.126421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.126432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.388 #39 NEW cov: 12202 ft: 15286 corp: 38/108b lim: 5 exec/s: 39 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:07:39.388 [2024-07-25 09:22:52.176128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.176152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.388 [2024-07-25 09:22:52.176210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.176222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.388 [2024-07-25 09:22:52.176280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.176292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.388 [2024-07-25 09:22:52.176347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.388 [2024-07-25 09:22:52.176358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.647 #40 NEW cov: 12202 ft: 15289 corp: 39/112b lim: 5 exec/s: 40 rss: 72Mb L: 4/5 MS: 1 CrossOver- 00:07:39.647 [2024-07-25 09:22:52.216422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.216446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.216502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.216513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.216567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.216595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.216651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.216661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.216716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.216730] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.256548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.256571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.256628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.256639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.256695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.256706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.256763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.256775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.256832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.256844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.647 #42 NEW cov: 12202 ft: 15290 corp: 40/117b lim: 5 exec/s: 42 rss: 72Mb L: 5/5 MS: 2 ChangeByte-CrossOver- 00:07:39.647 [2024-07-25 09:22:52.296637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.296660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.296717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.296728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.296785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.296798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.296854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.296867] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.647 [2024-07-25 09:22:52.296924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.296936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.647 #43 NEW cov: 12202 ft: 15298 corp: 41/122b lim: 5 exec/s: 43 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:07:39.647 [2024-07-25 09:22:52.346149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.647 [2024-07-25 09:22:52.346175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.647 #44 NEW cov: 12202 ft: 15312 corp: 42/123b lim: 5 exec/s: 22 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:07:39.647 #44 DONE cov: 12202 ft: 15312 corp: 42/123b lim: 5 exec/s: 22 rss: 72Mb 00:07:39.647 Done 44 runs in 2 second(s) 00:07:39.906 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.906 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.906 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.906 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:39.906 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:39.906 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.906 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.907 09:22:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ 
-F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:39.907 [2024-07-25 09:22:52.528490] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:39.907 [2024-07-25 09:22:52.528570] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419241 ] 00:07:39.907 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.907 [2024-07-25 09:22:52.697809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.165 [2024-07-25 09:22:52.764219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.165 [2024-07-25 09:22:52.822727] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.165 [2024-07-25 09:22:52.838963] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:40.165 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.165 INFO: Seed: 3899157173 00:07:40.165 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:40.165 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:40.165 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:40.165 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.165 [2024-07-25 09:22:52.876858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.165 [2024-07-25 09:22:52.876894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.165 #2 INITED cov: 11975 ft: 11966 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:40.165 [2024-07-25 09:22:52.926816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.165 [2024-07-25 09:22:52.926843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 #3 NEW cov: 12088 ft: 12598 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeByte- 00:07:40.423 [2024-07-25 09:22:53.007034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-25 09:22:53.007061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 #4 NEW cov: 12094 ft: 12782 corp: 3/3b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBinInt- 00:07:40.423 [2024-07-25 09:22:53.057183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-25 09:22:53.057212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 #5 NEW cov: 12179 ft: 13071 corp: 4/4b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeByte- 00:07:40.423 [2024-07-25 
09:22:53.137405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-25 09:22:53.137433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 #6 NEW cov: 12179 ft: 13273 corp: 5/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBit- 00:07:40.423 [2024-07-25 09:22:53.187606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-25 09:22:53.187635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 [2024-07-25 09:22:53.187664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-25 09:22:53.187676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.423 #7 NEW cov: 12179 ft: 13982 corp: 6/7b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:07:40.681 [2024-07-25 09:22:53.247736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.247764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.681 [2024-07-25 09:22:53.247792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.247805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.681 #8 NEW cov: 12179 ft: 14036 corp: 7/9b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:07:40.681 [2024-07-25 09:22:53.338052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.338084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.681 [2024-07-25 09:22:53.338130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.338148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.681 [2024-07-25 09:22:53.338174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.338187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.681 [2024-07-25 09:22:53.338213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 
09:22:53.338226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.681 #9 NEW cov: 12179 ft: 14372 corp: 8/13b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:40.681 [2024-07-25 09:22:53.418338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.418365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.681 [2024-07-25 09:22:53.418394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.418406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.681 [2024-07-25 09:22:53.418431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.418443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.681 [2024-07-25 09:22:53.418467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.418479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.681 [2024-07-25 09:22:53.418520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.681 [2024-07-25 09:22:53.418533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.681 #10 NEW cov: 12179 ft: 14556 corp: 9/18b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertByte- 00:07:40.940 [2024-07-25 09:22:53.508358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.940 [2024-07-25 09:22:53.508386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.940 #11 NEW cov: 12179 ft: 14644 corp: 10/19b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:07:40.940 [2024-07-25 09:22:53.588630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.940 [2024-07-25 09:22:53.588658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.940 [2024-07-25 09:22:53.588687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.940 [2024-07-25 09:22:53.588700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.940 #12 NEW cov: 12179 ft: 14672 corp: 11/21b lim: 5 
exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:40.940 [2024-07-25 09:22:53.648718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.940 [2024-07-25 09:22:53.648745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.940 #13 NEW cov: 12179 ft: 14698 corp: 12/22b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 CopyPart- 00:07:40.940 [2024-07-25 09:22:53.728976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.940 [2024-07-25 09:22:53.729003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.940 [2024-07-25 09:22:53.729032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.940 [2024-07-25 09:22:53.729044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.198 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:41.198 #14 NEW cov: 12202 ft: 14782 corp: 13/24b lim: 5 exec/s: 14 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:41.198 [2024-07-25 09:22:53.899493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.198 [2024-07-25 09:22:53.899535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.198 #15 NEW cov: 12202 ft: 14871 corp: 14/25b lim: 5 exec/s: 15 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:07:41.198 [2024-07-25 09:22:53.959580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.198 [2024-07-25 09:22:53.959610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.198 #16 NEW cov: 12202 ft: 14923 corp: 15/26b lim: 5 exec/s: 16 rss: 72Mb L: 1/5 MS: 1 CopyPart- 00:07:41.457 [2024-07-25 09:22:54.019821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.457 [2024-07-25 09:22:54.019849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.457 [2024-07-25 09:22:54.019879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.457 [2024-07-25 09:22:54.019893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.457 [2024-07-25 09:22:54.019918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.457 [2024-07-25 09:22:54.019931] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.457 #17 NEW cov: 12202 ft: 15124 corp: 16/29b lim: 5 exec/s: 17 rss: 72Mb L: 3/5 MS: 1 CMP- DE: "\001\014"- 00:07:41.457 [2024-07-25 09:22:54.109918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.457 [2024-07-25 09:22:54.109945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.457 #18 NEW cov: 12202 ft: 15131 corp: 17/30b lim: 5 exec/s: 18 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:07:41.457 [2024-07-25 09:22:54.190173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.457 [2024-07-25 09:22:54.190205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.457 #19 NEW cov: 12202 ft: 15160 corp: 18/31b lim: 5 exec/s: 19 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:07:41.457 [2024-07-25 09:22:54.240328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.457 [2024-07-25 09:22:54.240355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.457 [2024-07-25 09:22:54.240383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.457 [2024-07-25 09:22:54.240396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.715 #20 NEW cov: 12202 ft: 15180 corp: 19/33b lim: 5 exec/s: 20 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:07:41.715 [2024-07-25 09:22:54.300512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.715 [2024-07-25 09:22:54.300538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.715 [2024-07-25 09:22:54.300568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.715 [2024-07-25 09:22:54.300581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.715 [2024-07-25 09:22:54.380649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.716 [2024-07-25 09:22:54.380675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.716 #22 NEW cov: 12202 ft: 15200 corp: 20/34b lim: 5 exec/s: 22 rss: 72Mb L: 1/5 MS: 2 CopyPart-EraseBytes- 00:07:41.716 [2024-07-25 09:22:54.440857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.716 [2024-07-25 09:22:54.440882] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:41.716 [2024-07-25 09:22:54.440911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.716 [2024-07-25 09:22:54.440924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:41.716 #23 NEW cov: 12202 ft: 15230 corp: 21/36b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 PersAutoDict- DE: "\001\014"-
00:07:41.716 [2024-07-25 09:22:54.521043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.716 [2024-07-25 09:22:54.521076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:41.975 #24 NEW cov: 12202 ft: 15239 corp: 22/37b lim: 5 exec/s: 24 rss: 73Mb L: 1/5 MS: 1 CopyPart-
00:07:41.975 [2024-07-25 09:22:54.581301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.975 [2024-07-25 09:22:54.581327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:41.975 [2024-07-25 09:22:54.581357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.975 [2024-07-25 09:22:54.581370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:41.975 [2024-07-25 09:22:54.581400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.975 [2024-07-25 09:22:54.581412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:41.975 #25 NEW cov: 12202 ft: 15244 corp: 23/40b lim: 5 exec/s: 25 rss: 73Mb L: 3/5 MS: 1 InsertByte-
00:07:41.975 [2024-07-25 09:22:54.641401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.975 [2024-07-25 09:22:54.641428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:41.975 [2024-07-25 09:22:54.641457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.975 [2024-07-25 09:22:54.641470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:41.975 #26 NEW cov: 12202 ft: 15300 corp: 24/42b lim: 5 exec/s: 26 rss: 73Mb L: 2/5 MS: 1 ChangeBit-
00:07:41.975 [2024-07-25 09:22:54.731640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.975 [2024-07-25 09:22:54.731667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:41.975 [2024-07-25 09:22:54.731696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:41.975 [2024-07-25 09:22:54.731709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:42.234 #27 NEW cov: 12202 ft: 15335 corp: 25/44b lim: 5 exec/s: 27 rss: 73Mb L: 2/5 MS: 1 InsertByte-
00:07:42.234 [2024-07-25 09:22:54.811841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:42.234 [2024-07-25 09:22:54.811867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:42.234 [2024-07-25 09:22:54.811896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:42.234 [2024-07-25 09:22:54.811910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:42.234 #28 NEW cov: 12202 ft: 15346 corp: 26/46b lim: 5 exec/s: 14 rss: 73Mb L: 2/5 MS: 1 ChangeByte-
00:07:42.234 #28 DONE cov: 12202 ft: 15346 corp: 26/46b lim: 5 exec/s: 14 rss: 73Mb
00:07:42.234 ###### Recommended dictionary. ######
00:07:42.234 "\001\014" # Uses: 1
00:07:42.234 ###### End of recommended dictionary. ######
00:07:42.234 Done 28 runs in 2 second(s)
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410'
00:07:42.234 09:22:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:42.234 09:22:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:42.234 09:22:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:42.234 09:22:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10
[2024-07-25 09:22:55.029815] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-25 09:22:55.029894] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419673 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 09:22:55.195892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 09:22:55.260752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-25 09:22:55.319280] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-07-25 09:22:55.335502] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 2102188915
INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 63Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:42.751 This may also happen if the target rejected all inputs we tried so far
00:07:42.751 [2024-07-25 09:22:55.402786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:42.751 [2024-07-25 09:22:55.402819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:42.751 [2024-07-25 09:22:55.402913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f10a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:42.751 [2024-07-25 09:22:55.402928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:42.751 NEW_FUNC[1/700]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205
00:07:42.751 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:42.751 #13 NEW cov: 11993 ft: 11993 corp: 2/17b lim: 40 exec/s: 0 rss: 71Mb L: 16/16 MS: 1 InsertRepeatedBytes-
00:07:43.010 [2024-07-25 09:22:55.573704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.573757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.010 [2024-07-25 09:22:55.573873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.573894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.010 #24 NEW cov: 12111 ft: 12575 corp: 3/34b lim: 40 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 InsertByte-
00:07:43.010 [2024-07-25 09:22:55.643737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.643762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.010 [2024-07-25 09:22:55.643856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b1f1f10a cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.643868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.010 #25 NEW cov: 12117 ft: 12810 corp: 4/52b lim: 40 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 CrossOver-
00:07:43.010 [2024-07-25 09:22:55.703947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:0f0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.703971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.010 [2024-07-25 09:22:55.704052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4e0e0ef1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.704064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.010 #26 NEW cov: 12202 ft: 13026 corp: 5/70b lim: 40 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 ChangeBinInt-
00:07:43.010 [2024-07-25 09:22:55.764099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1b1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.764122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.010 [2024-07-25 09:22:55.764212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1f1f10a cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.764225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.010 #27 NEW cov: 12202 ft: 13104 corp: 6/88b lim: 40 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 ShuffleBytes-
00:07:43.010 [2024-07-25 09:22:55.814251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:dcf1f1f1 cdw11:f1f1b1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.814274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.010 [2024-07-25 09:22:55.814366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1f1f1f1 cdw11:0af1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.010 [2024-07-25 09:22:55.814378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.268 #28 NEW cov: 12202 ft: 13149 corp: 7/107b lim: 40 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 InsertByte-
00:07:43.268 [2024-07-25 09:22:55.874526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f139f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.268 [2024-07-25 09:22:55.874549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.268 [2024-07-25 09:22:55.874649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1b1f1f1 cdw11:0af1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.268 [2024-07-25 09:22:55.874661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.268 #29 NEW cov: 12202 ft: 13201 corp: 8/126b lim: 40 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 InsertByte-
00:07:43.268 [2024-07-25 09:22:55.924938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:0f0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.268 [2024-07-25 09:22:55.924962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.268 [2024-07-25 09:22:55.925056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4e0e0ef1 cdw11:f1f10e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.268 [2024-07-25 09:22:55.925074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.269 #30 NEW cov: 12202 ft: 13244 corp: 9/144b lim: 40 exec/s: 0 rss: 72Mb L: 18/19 MS: 1 CopyPart-
00:07:43.269 [2024-07-25 09:22:55.994965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.269 [2024-07-25 09:22:55.994990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.269 #31 NEW cov: 12202 ft: 13666 corp: 10/156b lim: 40 exec/s: 0 rss: 72Mb L: 12/19 MS: 1 EraseBytes-
00:07:43.269 [2024-07-25 09:22:56.045556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f123f1f1 cdw11:f1f1b1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.269 [2024-07-25 09:22:56.045582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.269 [2024-07-25 09:22:56.045681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1f1f1f1 cdw11:0af1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.269 [2024-07-25 09:22:56.045693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.269 #32 NEW cov: 12202 ft: 13708 corp: 11/175b lim: 40 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 InsertByte-
00:07:43.527 [2024-07-25 09:22:56.096028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1d1f1 cdw11:0f0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.096052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.527 [2024-07-25 09:22:56.096150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4e0e0ef1 cdw11:f1f10e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.096164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.527 #33 NEW cov: 12202 ft: 13723 corp: 12/193b lim: 40 exec/s: 0 rss: 72Mb L: 18/19 MS: 1 ChangeBit-
00:07:43.527 [2024-07-25 09:22:56.156277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0bf8f1ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.156300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.527 [2024-07-25 09:22:56.156397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.156412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.527 #38 NEW cov: 12202 ft: 13738 corp: 13/215b lim: 40 exec/s: 0 rss: 72Mb L: 22/22 MS: 5 ChangeBit-CrossOver-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes-
00:07:43.527 [2024-07-25 09:22:56.206403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f139f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.206426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.527 [2024-07-25 09:22:56.206514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1b1f1f1 cdw11:0af1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.206528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.527 #39 NEW cov: 12202 ft: 13805 corp: 14/235b lim: 40 exec/s: 0 rss: 72Mb L: 20/22 MS: 1 InsertByte-
00:07:43.527 [2024-07-25 09:22:56.276477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1b1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.276500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.527 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:43.527 #40 NEW cov: 12225 ft: 13866 corp: 15/250b lim: 40 exec/s: 0 rss: 72Mb L: 15/22 MS: 1 EraseBytes-
00:07:43.527 [2024-07-25 09:22:56.327178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:e1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.327201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.527 [2024-07-25 09:22:56.327295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b1f1f10a cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.527 [2024-07-25 09:22:56.327308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.786 #41 NEW cov: 12225 ft: 13887 corp: 16/268b lim: 40 exec/s: 0 rss: 72Mb L: 18/22 MS: 1 ChangeBit-
00:07:43.786 [2024-07-25 09:22:56.377939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.377963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.786 [2024-07-25 09:22:56.378047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffd1f10f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.378060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.786 [2024-07-25 09:22:56.378150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0e0e0e4e cdw11:0e0ef1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.378164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:43.786 #42 NEW cov: 12225 ft: 14175 corp: 17/297b lim: 40 exec/s: 42 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes-
00:07:43.786 [2024-07-25 09:22:56.437890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.437912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.786 [2024-07-25 09:22:56.437997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1b10af1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.438022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.786 #43 NEW cov: 12225 ft: 14204 corp: 18/315b lim: 40 exec/s: 43 rss: 72Mb L: 18/29 MS: 1 ShuffleBytes-
00:07:43.786 [2024-07-25 09:22:56.488578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1b10af1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.488601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.786 [2024-07-25 09:22:56.488694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1f1f1f1 cdw11:f10af1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.488707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.786 [2024-07-25 09:22:56.488797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f1f1f1f1 cdw11:f1b10af1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.488811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:43.786 #44 NEW cov: 12225 ft: 14305 corp: 19/345b lim: 40 exec/s: 44 rss: 72Mb L: 30/30 MS: 1 CopyPart-
00:07:43.786 [2024-07-25 09:22:56.548186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.786 [2024-07-25 09:22:56.548208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.786 #45 NEW cov: 12225 ft: 14340 corp: 20/355b lim: 40 exec/s: 45 rss: 72Mb L: 10/30 MS: 1 EraseBytes-
00:07:44.044 [2024-07-25 09:22:56.608527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1b1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.608552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.044 #46 NEW cov: 12225 ft: 14416 corp: 21/370b lim: 40 exec/s: 46 rss: 72Mb L: 15/30 MS: 1 ShuffleBytes-
00:07:44.044 [2024-07-25 09:22:56.678969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f139f1 cdw11:f1f1b1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.678993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.044 [2024-07-25 09:22:56.679081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1b1f1f1 cdw11:0af1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.679096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.044 #47 NEW cov: 12225 ft: 14454 corp: 22/390b lim: 40 exec/s: 47 rss: 72Mb L: 20/30 MS: 1 ChangeBit-
00:07:44.044 [2024-07-25 09:22:56.749326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.749352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.044 [2024-07-25 09:22:56.749455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1b10af1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.749469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.044 #48 NEW cov: 12225 ft: 14500 corp: 23/412b lim: 40 exec/s: 48 rss: 72Mb L: 22/30 MS: 1 CMP- DE: "\001\000\000\000"-
00:07:44.044 [2024-07-25 09:22:56.799523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:0f0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.799553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.044 [2024-07-25 09:22:56.799645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:f1f10e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.799660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.044 #49 NEW cov: 12225 ft: 14561 corp: 24/430b lim: 40 exec/s: 49 rss: 72Mb L: 18/30 MS: 1 PersAutoDict- DE: "\001\000\000\000"-
00:07:44.044 [2024-07-25 09:22:56.850295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:fffff1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.850320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.044 [2024-07-25 09:22:56.850411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1b1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.850425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.044 [2024-07-25 09:22:56.850528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0af1f1f1 cdw11:f1f1f10a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.044 [2024-07-25 09:22:56.850542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:44.303 #50 NEW cov: 12225 ft: 14576 corp: 25/454b lim: 40 exec/s: 50 rss: 72Mb L: 24/30 MS: 1 InsertRepeatedBytes-
00:07:44.303 [2024-07-25 09:22:56.900364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:0f0e0e0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:56.900389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.303 [2024-07-25 09:22:56.900487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:f1f10e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:56.900502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.303 #51 NEW cov: 12225 ft: 14595 corp: 26/472b lim: 40 exec/s: 51 rss: 72Mb L: 18/30 MS: 1 ChangeBinInt-
00:07:44.303 [2024-07-25 09:22:56.970947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:56.970970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.303 [2024-07-25 09:22:56.971059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:56.971076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.303 #52 NEW cov: 12225 ft: 14647 corp: 27/489b lim: 40 exec/s: 52 rss: 72Mb L: 17/30 MS: 1 CopyPart-
00:07:44.303 [2024-07-25 09:22:57.021461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:57.021484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.303 [2024-07-25 09:22:57.021585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b1f1f1f1 cdw11:f1f1f1b1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:57.021598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.303 [2024-07-25 09:22:57.021687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f1f10af1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:57.021700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:44.303 #53 NEW cov: 12225 ft: 14671 corp: 28/519b lim: 40 exec/s: 53 rss: 72Mb L: 30/30 MS: 1 CrossOver-
00:07:44.303 [2024-07-25 09:22:57.071596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:0f0e0e12 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:57.071619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.303 [2024-07-25 09:22:57.071702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:f1f10e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.303 [2024-07-25 09:22:57.071715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.303 #54 NEW cov: 12225 ft: 14695 corp: 29/537b lim: 40 exec/s: 54 rss: 73Mb L: 18/30 MS: 1 ChangeBinInt-
00:07:44.562 [2024-07-25 09:22:57.131990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.132013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.562 [2024-07-25 09:22:57.132102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1b10af1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.132115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.562 #55 NEW cov: 12225 ft: 14720 corp: 30/558b lim: 40 exec/s: 55 rss: 73Mb L: 21/30 MS: 1 CrossOver-
00:07:44.562 [2024-07-25 09:22:57.182208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f13ff1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.182232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.562 [2024-07-25 09:22:57.182323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.182337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.562 #56 NEW cov: 12225 ft: 14724 corp: 31/575b lim: 40 exec/s: 56 rss: 73Mb L: 17/30 MS: 1 ChangeByte-
00:07:44.562 [2024-07-25 09:22:57.232605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.232629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.562 [2024-07-25 09:22:57.232720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1b10af1 cdw11:f130f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.232734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.562 #57 NEW cov: 12225 ft: 14752 corp: 32/594b lim: 40 exec/s: 57 rss: 73Mb L: 19/30 MS: 1 InsertByte-
00:07:44.562 [2024-07-25 09:22:57.283505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1ffff cdw11:ffffffef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.283529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.562 [2024-07-25 09:22:57.283626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffd1f10f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.283643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.562 [2024-07-25 09:22:57.283744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0e0e0e4e cdw11:0e0ef1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.283758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:44.562 #58 NEW cov: 12225 ft: 14760 corp: 33/623b lim: 40 exec/s: 58 rss: 73Mb L: 29/29 MS: 1 ChangeBit-
00:07:44.562 [2024-07-25 09:22:57.343461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.343485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.562 [2024-07-25 09:22:57.343573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f1b10af1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.562 [2024-07-25 09:22:57.343587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.821 #59 NEW cov: 12225 ft: 14778 corp: 34/644b lim: 40 exec/s: 29 rss: 73Mb L: 21/30 MS: 1 ShuffleBytes-
00:07:44.821 #59 DONE cov: 12225 ft: 14778 corp: 34/644b lim: 40 exec/s: 29 rss: 73Mb
00:07:44.821 ###### Recommended dictionary. ######
00:07:44.821 "\001\000\000\000" # Uses: 1
00:07:44.821 ###### End of recommended dictionary. ######
00:07:44.821 Done 59 runs in 2 second(s)
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411'
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:44.821 09:22:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11
[2024-07-25 09:22:57.526351] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-25 09:22:57.526433] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420108 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 09:22:57.690423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 09:22:57.754868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-25 09:22:57.813083] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-07-25 09:22:57.829283] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 302217280
INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 63Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:45.080 This may also happen if the target rejected all inputs we tried so far
00:07:45.080 [2024-07-25 09:22:57.873999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.080 [2024-07-25 09:22:57.874030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.339 NEW_FUNC[1/701]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223
00:07:45.339 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:45.339 #10 NEW cov: 12004 ft: 12002 corp: 2/11b lim: 40 exec/s: 0 rss: 71Mb L: 10/10 MS: 3 ChangeBinInt-ChangeBit-InsertRepeatedBytes-
00:07:45.339 [2024-07-25 09:22:58.044380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.339 [2024-07-25 09:22:58.044414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.339 #11 NEW cov: 12123 ft: 12536 corp: 3/21b lim: 40 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 CopyPart-
00:07:45.598 [2024-07-25 09:22:58.134559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.598 [2024-07-25 09:22:58.134589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.598 #12 NEW cov: 12129 ft: 12750 corp: 4/31b lim: 40 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeASCIIInt-
00:07:45.598 [2024-07-25 09:22:58.184669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.598 [2024-07-25 09:22:58.184697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.598 #13 NEW cov: 12214 ft: 13039 corp: 5/41b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver-
00:07:45.598 [2024-07-25 09:22:58.264896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:7a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.598 [2024-07-25 09:22:58.264924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.598 #19 NEW cov: 12214 ft: 13261 corp: 6/51b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeByte-
00:07:45.598 [2024-07-25 09:22:58.325010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:000017ac cdw11:53a99cad SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.598 [2024-07-25 09:22:58.325045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.598 #20 NEW cov: 12214 ft: 13322 corp: 7/61b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CMP- DE: "\000\027\254S\251\234\255\240"-
00:07:45.598 [2024-07-25 09:22:58.405291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.598 [2024-07-25 09:22:58.405320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.856 #21 NEW cov: 12214 ft: 13389 corp: 8/71b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeByte-
00:07:45.856 [2024-07-25 09:22:58.455583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:000017ac cdw11:53ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.455610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.856 [2024-07-25 09:22:58.455643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.455656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:45.856 [2024-07-25 09:22:58.455683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.455696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:45.856 [2024-07-25 09:22:58.455723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.455735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:45.856 #22 NEW cov: 12214 ft: 14263 corp: 9/108b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 InsertRepeatedBytes-
00:07:45.856 [2024-07-25 09:22:58.535596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.535623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:45.856 #24 NEW cov: 12214 ft: 14351 corp: 10/116b lim: 40 exec/s: 0 rss: 72Mb L: 8/37 MS: 2 CMP-CrossOver- DE: "\001\000\000\000"-
00:07:45.856 [2024-07-25 09:22:58.595886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:17ac53ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.595913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.595972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.596038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:45.856 [2024-07-25 09:22:58.596051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:46.115 #25 NEW cov: 12214 ft: 14429 corp: 11/155b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CMP- DE: "\000\001"-
00:07:46.115 [2024-07-25 09:22:58.686156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:17ac53ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.686183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.686215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.686229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.686256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:46.115 #26 NEW cov: 12214 ft: 14493 corp: 12/194b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ShuffleBytes-
00:07:46.115 [2024-07-25 09:22:58.766376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:27000100 cdw11:17ac53ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.766402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.766433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.766447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.766473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:46.115 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:46.115 #27 NEW cov: 12237 ft: 14596 corp: 13/233b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ChangeBinInt-
00:07:46.115 [2024-07-25 09:22:58.826343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00170000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.826368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:46.115 #28 NEW cov: 12237 ft: 14633 corp: 14/243b lim: 40 exec/s: 28 rss: 72Mb L: 10/39 MS: 1 CMP- DE: "\000\027"-
00:07:46.115 [2024-07-25 09:22:58.906755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:17ac53ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.906782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.906814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.906831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.115 [2024-07-25 09:22:58.906858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:46.373 #29 NEW cov: 12237 ft: 14701 corp: 15/282b lim: 40 exec/s: 29 rss: 72Mb L: 39/39 MS: 1 CopyPart-
00:07:46.373 [2024-07-25 09:22:58.996964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:17ac53ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.373 [2024-07-25 09:22:58.996990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.373 [2024-07-25 09:22:58.997020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.373 [2024-07-25 09:22:58.997035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.373 [2024-07-25 09:22:58.997062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:46.373 #30 NEW cov: 12237 ft: 14740 corp: 16/321b lim: 40 exec/s: 30 rss: 72Mb L: 39/39 MS: 1 ChangeBit-
00:07:46.374 [2024-07-25 09:22:59.047058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:17ac53ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.374 [2024-07-25 09:22:59.047089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.374 [2024-07-25 09:22:59.047120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.374 [2024-07-25 09:22:59.047133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.374 [2024-07-25 09:22:59.047158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:46.374 #31 NEW cov: 12237 ft: 14755 corp: 17/360b lim: 40 exec/s: 31 rss: 72Mb L: 39/39 MS: 1 ChangeASCIIInt-
00:07:46.374 [2024-07-25 09:22:59.127143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00170000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.374 [2024-07-25 09:22:59.127173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:46.374 #32 NEW cov: 12237 ft: 14813 corp: 18/368b lim: 40 exec/s: 32 rss: 72Mb L: 8/39 MS: 1 PersAutoDict- DE: "\000\027"-
00:07:46.374 [2024-07-25 09:22:59.177233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.374 [2024-07-25 09:22:59.177259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:46.633 #33 NEW cov: 12237 ft: 14827 corp: 19/378b lim: 40 exec/s: 33 rss: 72Mb L: 10/39 MS: 1 ChangeBinInt-
00:07:46.633 [2024-07-25 09:22:59.227578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00010017 cdw11:ac53ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.633 [2024-07-25 09:22:59.227605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.633 [2024-07-25 09:22:59.227635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.633 [2024-07-25 09:22:59.227648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:46.632 [2024-07-25 09:22:59.227674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.632 [2024-07-25 09:22:59.227686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:46.632 [2024-07-25 09:22:59.227712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:46.632 [2024-07-25 09:22:59.227724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:46.632 #34 NEW cov: 12237 ft: 14831 corp: 20/417b lim: 40 exec/s: 34 rss: 72Mb L: 39/39 MS: 1 CopyPart-
00:07:46.632 [2024-07-25 09:22:59.287530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.632 [2024-07-25 09:22:59.287558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:46.632 #35 NEW cov: 12237 ft: 14855 corp: 21/427b lim: 40 exec/s: 35 rss: 72Mb L: 10/39 MS: 1 ChangeASCIIInt-
00:07:46.633 [2024-07-25 09:22:59.337690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:7a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.633 [2024-07-25 09:22:59.337719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:46.633 #36 NEW cov: 12237 ft: 14878 corp: 22/437b lim: 40 exec/s: 36 rss: 72Mb L: 10/39 MS: 1 ChangeASCIIInt-
00:07:46.633 [2024-07-25 09:22:59.417885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00010a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.633 [2024-07-25 09:22:59.417911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:46.891 #37 NEW cov: 12237 ft: 14884 corp: 23/447b lim: 40 exec/s: 37 rss: 72Mb L: 10/39 MS: 1 PersAutoDict- DE: "\000\001"-
00:07:46.891 [2024-07-25 09:22:59.498296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:17ac53ff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.891 [2024-07-25 09:22:59.498323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.891 [2024-07-25 09:22:59.498353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:46.891 [2024-07-25 09:22:59.498369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:46.891 [2024-07-25 09:22:59.498395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:46.891 [2024-07-25 09:22:59.498407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0
sqhd:0011 p:0 m:0 dnr:0 00:07:46.891 [2024-07-25 09:22:59.498432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.891 [2024-07-25 09:22:59.498444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.891 #38 NEW cov: 12237 ft: 14912 corp: 24/486b lim: 40 exec/s: 38 rss: 72Mb L: 39/39 MS: 1 CopyPart- 00:07:46.891 [2024-07-25 09:22:59.558361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.891 [2024-07-25 09:22:59.558387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.891 [2024-07-25 09:22:59.558418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000036 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.891 [2024-07-25 09:22:59.558430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.891 #39 NEW cov: 12237 ft: 15142 corp: 25/503b lim: 40 exec/s: 39 rss: 73Mb L: 17/39 MS: 1 CopyPart- 00:07:46.891 [2024-07-25 09:22:59.648569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000c5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.891 [2024-07-25 09:22:59.648596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.891 [2024-07-25 09:22:59.648626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c5c5c5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.891 [2024-07-25 09:22:59.648639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.891 #40 NEW cov: 12237 ft: 15144 corp: 26/524b lim: 40 exec/s: 40 rss: 73Mb L: 21/39 MS: 1 InsertRepeatedBytes- 00:07:47.150 [2024-07-25 09:22:59.708632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:3d170000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.150 [2024-07-25 09:22:59.708658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.150 #41 NEW cov: 12237 ft: 15252 corp: 27/534b lim: 40 exec/s: 41 rss: 73Mb L: 10/39 MS: 1 ChangeByte- 00:07:47.150 [2024-07-25 09:22:59.788899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.150 [2024-07-25 09:22:59.788926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.150 #42 NEW cov: 12237 ft: 15278 corp: 28/544b lim: 40 exec/s: 42 rss: 73Mb L: 10/39 MS: 1 CrossOver- 00:07:47.150 [2024-07-25 09:22:59.839238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.150 [2024-07-25 09:22:59.839265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0
00:07:47.150 [2024-07-25 09:22:59.839297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:47.150 [2024-07-25 09:22:59.839315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:47.150 [2024-07-25 09:22:59.839343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:47.150 [2024-07-25 09:22:59.839356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:47.150 [2024-07-25 09:22:59.839383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:47.150 [2024-07-25 09:22:59.839396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:47.150 #43 NEW cov: 12237 ft: 15306 corp: 29/581b lim: 40 exec/s: 21 rss: 73Mb L: 37/39 MS: 1 InsertRepeatedBytes-
00:07:47.150 #43 DONE cov: 12237 ft: 15306 corp: 29/581b lim: 40 exec/s: 21 rss: 73Mb
00:07:47.150 ###### Recommended dictionary. ######
00:07:47.150 "\000\027\254S\251\234\255\240" # Uses: 0
00:07:47.150 "\001\000\000\000" # Uses: 0
00:07:47.150 "\000\001" # Uses: 1
00:07:47.150 "\000\027" # Uses: 1
00:07:47.150 ###### End of recommended dictionary. ######
00:07:47.150 Done 43 runs in 2 second(s)
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412'
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.409
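For anyone reconstructing the harness setup from the xtrace lines above: the per-fuzzer configuration boils down to roughly the shell sketch below. This is a reconstruction, not the verbatim nvmf/run.sh source; the $rootdir variable, the "44" port prefix, and the redirection of sed's output into $nvmf_cfg are assumptions inferred from the traced values and from the -c argument passed to the fuzzer further down.

    # Rough reconstruction of the traced nvmf/run.sh@23-38 sequence (assumptions noted).
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzer_type=12 timen=1 core=0x1
    corpus_dir="$rootdir/../corpus/llvm_nvmf_$fuzzer_type"    # per-fuzzer on-disk corpus
    nvmf_cfg="/tmp/fuzz_json_$fuzzer_type.conf"               # per-fuzzer target config
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"
    port="44$(printf %02d "$fuzzer_type")"                    # "44" + "12" -> 4412 (assumed scheme)
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # Point the shared JSON config at this fuzzer's port; the > target is assumed.
    sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"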
09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:47.409 09:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12
00:07:47.409 [2024-07-25 09:23:00.058819] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:47.409 [2024-07-25 09:23:00.058893] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420537 ]
00:07:47.668 EAL: No free 2048 kB hugepages reported on node 1
00:07:47.668 [2024-07-25 09:23:00.225368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.668 [2024-07-25 09:23:00.291574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.668 [2024-07-25 09:23:00.350431] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:47.668 [2024-07-25 09:23:00.366653] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 ***
00:07:47.668 INFO: Running with entropic power schedule (0xFF, 100).
00:07:47.668 INFO: Seed: 2839223517
00:07:47.668 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
00:07:47.668 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
00:07:47.668 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:07:47.668 INFO: A corpus is not provided, starting from an empty corpus
00:07:47.668 #2 INITED exec/s: 0 rss: 63Mb
00:07:47.668 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
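Continuing the sketch, the two echo lines at run.sh@41-42 most plausibly populate the LeakSanitizer suppression file named in LSAN_OPTIONS (the trace records the echo commands but not their redirection targets, so the > and >> below are assumptions, as is the env-var prefix), after which the harness itself is launched. Flag meanings as traced: -m is the SPDK core mask and -s the hugepage memory size in MB; -F carries the transport ID built above, -c the rewritten JSON config, -t the time budget, -D the on-disk corpus, and -Z the fuzzer index (12 appears to select the admin DIRECTIVE SEND fuzzer, per the NEW_FUNC line below).

    # Rough reconstruction of the traced nvmf/run.sh@41-45 sequence (assumptions noted),
    # reusing the variables from the sketch above.
    # Suppress known, intentional leaks so LeakSanitizer does not fail the run.
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"
    # Launch the libFuzzer-based NVMe target fuzzer against the TCP listener.
    LSAN_OPTIONS="$LSAN_OPTIONS" \
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$core" -s 512 -P "$rootdir/../output/llvm/" -F "$trid" \
        -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"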
00:07:47.668 This may also happen if the target rejected all inputs we tried so far 00:07:47.668 [2024-07-25 09:23:00.434794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.668 [2024-07-25 09:23:00.434826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.668 [2024-07-25 09:23:00.434920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.668 [2024-07-25 09:23:00.434935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.668 [2024-07-25 09:23:00.435032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.668 [2024-07-25 09:23:00.435048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.668 [2024-07-25 09:23:00.435159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.668 [2024-07-25 09:23:00.435173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.927 NEW_FUNC[1/701]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:47.927 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:47.927 #4 NEW cov: 12008 ft: 12008 corp: 2/34b lim: 40 exec/s: 0 rss: 71Mb L: 33/33 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:47.927 [2024-07-25 09:23:00.604913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.927 [2024-07-25 09:23:00.604947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.927 [2024-07-25 09:23:00.605034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.927 [2024-07-25 09:23:00.605060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.927 [2024-07-25 09:23:00.605151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.927 [2024-07-25 09:23:00.605165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.927 [2024-07-25 09:23:00.605268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.927 [2024-07-25 09:23:00.605282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.927 #10 NEW cov: 12121 
ft: 12612 corp: 3/67b lim: 40 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 ShuffleBytes- 00:07:47.927 [2024-07-25 09:23:00.675202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.927 [2024-07-25 09:23:00.675229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.927 [2024-07-25 09:23:00.675334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.927 [2024-07-25 09:23:00.675346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.927 [2024-07-25 09:23:00.675434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.927 [2024-07-25 09:23:00.675448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.927 [2024-07-25 09:23:00.675541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.927 [2024-07-25 09:23:00.675555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.927 #11 NEW cov: 12127 ft: 12800 corp: 4/101b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertByte- 00:07:48.186 [2024-07-25 09:23:00.745496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.745523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.745620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.745634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.745727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.745741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.745830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.745844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.186 #12 NEW cov: 12212 ft: 13079 corp: 5/135b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBit- 00:07:48.186 [2024-07-25 09:23:00.815737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.815764] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.815860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.815875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.815965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.815978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.816074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.816089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.186 #18 NEW cov: 12212 ft: 13141 corp: 6/168b lim: 40 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBinInt- 00:07:48.186 [2024-07-25 09:23:00.865838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.865864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.865957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.865971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.866056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.866073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.186 [2024-07-25 09:23:00.866168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.186 [2024-07-25 09:23:00.866181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.186 #19 NEW cov: 12212 ft: 13182 corp: 7/201b lim: 40 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBit- 00:07:48.187 [2024-07-25 09:23:00.916187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:005b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.187 [2024-07-25 09:23:00.916212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.187 [2024-07-25 09:23:00.916305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.187 [2024-07-25 09:23:00.916318] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.187 [2024-07-25 09:23:00.916406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.187 [2024-07-25 09:23:00.916419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.187 [2024-07-25 09:23:00.916510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.187 [2024-07-25 09:23:00.916523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.187 #20 NEW cov: 12212 ft: 13287 corp: 8/235b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:48.187 [2024-07-25 09:23:00.986610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d2b0000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.187 [2024-07-25 09:23:00.986635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.187 [2024-07-25 09:23:00.986729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.187 [2024-07-25 09:23:00.986742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.187 [2024-07-25 09:23:00.986827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.187 [2024-07-25 09:23:00.986839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.187 [2024-07-25 09:23:00.986926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.187 [2024-07-25 09:23:00.986939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.446 #21 NEW cov: 12212 ft: 13378 corp: 9/269b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeByte- 00:07:48.446 [2024-07-25 09:23:01.036073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.036097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.036188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.036201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.446 #22 NEW cov: 12212 ft: 13769 corp: 10/288b lim: 40 exec/s: 0 rss: 72Mb L: 19/34 MS: 1 EraseBytes- 00:07:48.446 [2024-07-25 09:23:01.086909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) 
qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:005b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.086932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.087023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.087036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.087131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.087144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.087232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.087245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.446 #23 NEW cov: 12212 ft: 13846 corp: 11/322b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:48.446 [2024-07-25 09:23:01.137241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.137264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.137360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000009 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.137384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.137478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.137490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.137577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.137592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.446 #24 NEW cov: 12212 ft: 13870 corp: 12/355b lim: 40 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBinInt- 00:07:48.446 [2024-07-25 09:23:01.197450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:005b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.197473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.197553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.197566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.197652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.197664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.446 [2024-07-25 09:23:01.197748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.446 [2024-07-25 09:23:01.197761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.446 #25 NEW cov: 12212 ft: 13911 corp: 13/389b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBit- 00:07:48.705 [2024-07-25 09:23:01.257676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d2b0000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.705 [2024-07-25 09:23:01.257700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.705 [2024-07-25 09:23:01.257783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.705 [2024-07-25 09:23:01.257796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.705 [2024-07-25 09:23:01.257887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.705 [2024-07-25 09:23:01.257900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.705 [2024-07-25 09:23:01.257984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.705 [2024-07-25 09:23:01.257996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.705 #26 NEW cov: 12212 ft: 13939 corp: 14/423b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:48.705 [2024-07-25 09:23:01.317851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:005b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.705 [2024-07-25 09:23:01.317874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.705 [2024-07-25 09:23:01.317971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.705 [2024-07-25 09:23:01.317985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.705 [2024-07-25 09:23:01.318103] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.705 [2024-07-25 09:23:01.318118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.706 [2024-07-25 09:23:01.318207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.318220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.706 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:48.706 #27 NEW cov: 12235 ft: 13987 corp: 15/458b lim: 40 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertByte- 00:07:48.706 [2024-07-25 09:23:01.387495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.387520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.706 [2024-07-25 09:23:01.387620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.387634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.706 #28 NEW cov: 12235 ft: 14000 corp: 16/477b lim: 40 exec/s: 28 rss: 72Mb L: 19/35 MS: 1 ChangeBinInt- 00:07:48.706 [2024-07-25 09:23:01.458591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.458616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.706 [2024-07-25 09:23:01.458738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.458752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.706 [2024-07-25 09:23:01.458843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.458856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.706 [2024-07-25 09:23:01.458948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.458963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.706 #29 NEW cov: 12235 ft: 14008 corp: 17/515b lim: 40 exec/s: 29 rss: 72Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:48.706 [2024-07-25 09:23:01.508815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 
cdw11:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.508837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.706 [2024-07-25 09:23:01.508929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00002100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.508942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.706 [2024-07-25 09:23:01.509023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.509039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.706 [2024-07-25 09:23:01.509126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.706 [2024-07-25 09:23:01.509140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.964 #30 NEW cov: 12235 ft: 14075 corp: 18/548b lim: 40 exec/s: 30 rss: 72Mb L: 33/38 MS: 1 ChangeBinInt- 00:07:48.964 [2024-07-25 09:23:01.569104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00ff16ac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.569127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.964 [2024-07-25 09:23:01.569214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:559c1037 cdw11:56000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.569227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.964 [2024-07-25 09:23:01.569317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.569330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.964 [2024-07-25 09:23:01.569423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.569437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.964 #31 NEW cov: 12235 ft: 14118 corp: 19/581b lim: 40 exec/s: 31 rss: 72Mb L: 33/38 MS: 1 CMP- DE: "\377\026\254U\234\0207V"- 00:07:48.964 [2024-07-25 09:23:01.619294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.619317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.964 [2024-07-25 09:23:01.619401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
SEND (19) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.619413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.964 [2024-07-25 09:23:01.619499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.619512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.964 [2024-07-25 09:23:01.619601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.619614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.964 #32 NEW cov: 12235 ft: 14137 corp: 20/616b lim: 40 exec/s: 32 rss: 72Mb L: 35/38 MS: 1 InsertByte- 00:07:48.964 [2024-07-25 09:23:01.669500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff16ac55 cdw11:9c103756 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.669524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.964 [2024-07-25 09:23:01.669607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.964 [2024-07-25 09:23:01.669635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.965 [2024-07-25 09:23:01.669722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.965 [2024-07-25 09:23:01.669735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.965 [2024-07-25 09:23:01.669823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.965 [2024-07-25 09:23:01.669837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.965 #33 NEW cov: 12235 ft: 14178 corp: 21/650b lim: 40 exec/s: 33 rss: 72Mb L: 34/38 MS: 1 PersAutoDict- DE: "\377\026\254U\234\0207V"- 00:07:48.965 [2024-07-25 09:23:01.719555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:005b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.965 [2024-07-25 09:23:01.719578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.965 [2024-07-25 09:23:01.719667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.965 [2024-07-25 09:23:01.719681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.965 [2024-07-25 09:23:01.719765] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.965 [2024-07-25 09:23:01.719779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.965 [2024-07-25 09:23:01.719858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.965 [2024-07-25 09:23:01.719872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.965 #34 NEW cov: 12235 ft: 14193 corp: 22/685b lim: 40 exec/s: 34 rss: 72Mb L: 35/38 MS: 1 CopyPart- 00:07:49.224 [2024-07-25 09:23:01.779897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.779922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 09:23:01.780005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.780018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 09:23:01.780103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.780117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 09:23:01.780198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.780210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.224 #35 NEW cov: 12235 ft: 14198 corp: 23/723b lim: 40 exec/s: 35 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:49.224 [2024-07-25 09:23:01.830089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.830112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 09:23:01.830194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000009 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.830208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 09:23:01.830295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.830308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 
09:23:01.830396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.830409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.224 #36 NEW cov: 12235 ft: 14216 corp: 24/756b lim: 40 exec/s: 36 rss: 73Mb L: 33/38 MS: 1 ChangeBit- 00:07:49.224 [2024-07-25 09:23:01.890323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.890345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 09:23:01.890428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:08000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.890441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 09:23:01.890534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.890548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.224 [2024-07-25 09:23:01.890635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.224 [2024-07-25 09:23:01.890649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.225 #37 NEW cov: 12235 ft: 14231 corp: 25/791b lim: 40 exec/s: 37 rss: 73Mb L: 35/38 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:49.225 [2024-07-25 09:23:01.950634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.225 [2024-07-25 09:23:01.950657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.225 [2024-07-25 09:23:01.950741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.225 [2024-07-25 09:23:01.950753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.225 [2024-07-25 09:23:01.950841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.225 [2024-07-25 09:23:01.950854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.225 [2024-07-25 09:23:01.950937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:fffc0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.225 [2024-07-25 09:23:01.950952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:49.225 #38 NEW cov: 12235 ft: 14234 corp: 26/826b lim: 40 exec/s: 38 rss: 73Mb L: 35/38 MS: 1 ChangeBinInt- 00:07:49.225 [2024-07-25 09:23:02.000847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.225 [2024-07-25 09:23:02.000871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.225 [2024-07-25 09:23:02.000955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.225 [2024-07-25 09:23:02.000968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.225 [2024-07-25 09:23:02.001061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.225 [2024-07-25 09:23:02.001079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.225 [2024-07-25 09:23:02.001168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:fffc0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.225 [2024-07-25 09:23:02.001183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.483 #39 NEW cov: 12235 ft: 14260 corp: 27/861b lim: 40 exec/s: 39 rss: 73Mb L: 35/38 MS: 1 ChangeByte- 00:07:49.483 [2024-07-25 09:23:02.061149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.483 [2024-07-25 09:23:02.061173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.483 [2024-07-25 09:23:02.061274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.483 [2024-07-25 09:23:02.061286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.483 [2024-07-25 09:23:02.061373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.483 [2024-07-25 09:23:02.061385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.483 [2024-07-25 09:23:02.061472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.483 [2024-07-25 09:23:02.061484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.483 #40 NEW cov: 12235 ft: 14281 corp: 28/896b lim: 40 exec/s: 40 rss: 73Mb L: 35/38 MS: 1 InsertByte- 00:07:49.483 [2024-07-25 09:23:02.111571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:49.483 [2024-07-25 09:23:02.111595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.483 [2024-07-25 09:23:02.111702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00080001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.483 [2024-07-25 09:23:02.111714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.111807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.111824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.111909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.111923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.484 #41 NEW cov: 12235 ft: 14303 corp: 29/931b lim: 40 exec/s: 41 rss: 73Mb L: 35/38 MS: 1 ShuffleBytes- 00:07:49.484 [2024-07-25 09:23:02.181909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.181935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.182019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.182034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.182126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.182142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.182224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.182238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.484 #42 NEW cov: 12235 ft: 14310 corp: 30/964b lim: 40 exec/s: 42 rss: 73Mb L: 33/38 MS: 1 ShuffleBytes- 00:07:49.484 [2024-07-25 09:23:02.232090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.232115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.232206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.232219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.232303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.232315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.232413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.232426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.484 #43 NEW cov: 12235 ft: 14372 corp: 31/998b lim: 40 exec/s: 43 rss: 73Mb L: 34/38 MS: 1 ShuffleBytes- 00:07:49.484 [2024-07-25 09:23:02.282411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.282435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.282525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.282538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.282623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffff03 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.282637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.484 [2024-07-25 09:23:02.282732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.484 [2024-07-25 09:23:02.282747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.742 #44 NEW cov: 12235 ft: 14401 corp: 32/1031b lim: 40 exec/s: 44 rss: 73Mb L: 33/38 MS: 1 CMP- DE: "\377\377\377\377\377\377\003\000"- 00:07:49.743 [2024-07-25 09:23:02.332742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.743 [2024-07-25 09:23:02.332767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.743 [2024-07-25 09:23:02.332859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000003f cdw11:26080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.743 [2024-07-25 09:23:02.332873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.743 [2024-07-25 09:23:02.332960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.743 [2024-07-25 09:23:02.332972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.743 [2024-07-25 09:23:02.333066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.743 [2024-07-25 09:23:02.333083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.743 #45 NEW cov: 12235 ft: 14415 corp: 33/1067b lim: 40 exec/s: 45 rss: 73Mb L: 36/38 MS: 1 InsertByte- 00:07:49.743 [2024-07-25 09:23:02.383205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5d000000 cdw11:5b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.743 [2024-07-25 09:23:02.383229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.743 [2024-07-25 09:23:02.383333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.743 [2024-07-25 09:23:02.383347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.743 [2024-07-25 09:23:02.383440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.743 [2024-07-25 09:23:02.383454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.743 [2024-07-25 09:23:02.383553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.743 [2024-07-25 09:23:02.383567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.743 #46 NEW cov: 12235 ft: 14425 corp: 34/1102b lim: 40 exec/s: 23 rss: 73Mb L: 35/38 MS: 1 ShuffleBytes- 00:07:49.743 #46 DONE cov: 12235 ft: 14425 corp: 34/1102b lim: 40 exec/s: 23 rss: 73Mb 00:07:49.743 ###### Recommended dictionary. ###### 00:07:49.743 "\377\026\254U\234\0207V" # Uses: 1 00:07:49.743 "\001\000\000\000\000\000\000\000" # Uses: 0 00:07:49.743 "\377\377\377\377\377\377\003\000" # Uses: 0 00:07:49.743 ###### End of recommended dictionary. 
###### 00:07:49.743 Done 46 runs in 2 second(s) 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.743 09:23:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:50.001 [2024-07-25 09:23:02.564854] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:50.001 [2024-07-25 09:23:02.564919] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420972 ] 00:07:50.001 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.001 [2024-07-25 09:23:02.730733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.001 [2024-07-25 09:23:02.794930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.260 [2024-07-25 09:23:02.853506] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.260 [2024-07-25 09:23:02.869710] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:50.260 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.260 INFO: Seed: 1046262369 00:07:50.260 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:50.260 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:50.260 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:50.260 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.260 #2 INITED exec/s: 0 rss: 63Mb 00:07:50.260 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.260 This may also happen if the target rejected all inputs we tried so far 00:07:50.260 [2024-07-25 09:23:02.914561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.260 [2024-07-25 09:23:02.914590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.260 [2024-07-25 09:23:02.914622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.260 [2024-07-25 09:23:02.914635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.260 [2024-07-25 09:23:02.914662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.260 [2024-07-25 09:23:02.914674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.260 [2024-07-25 09:23:02.914700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.260 [2024-07-25 09:23:02.914712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.518 NEW_FUNC[1/700]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:50.518 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:50.518 #12 NEW cov: 11992 ft: 11989 corp: 2/36b lim: 40 exec/s: 0 rss: 71Mb L: 35/35 MS: 5 ChangeByte-CopyPart-CrossOver-InsertByte-InsertRepeatedBytes- 00:07:50.518 [2024-07-25 
09:23:03.086195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.086229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.519 [2024-07-25 09:23:03.086294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.086308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.519 [2024-07-25 09:23:03.086369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.086381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.519 [2024-07-25 09:23:03.086443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:e0e0e0ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.086457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.519 #13 NEW cov: 12109 ft: 12446 corp: 3/74b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:50.519 [2024-07-25 09:23:03.146105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.146129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.519 [2024-07-25 09:23:03.146204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff090000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.146217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.519 [2024-07-25 09:23:03.146277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.146288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.519 [2024-07-25 09:23:03.146344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:e0e0e0ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.146355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.519 #14 NEW cov: 12115 ft: 12933 corp: 4/112b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 CMP- DE: "\011\000\000\000"- 00:07:50.519 [2024-07-25 09:23:03.195846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.195868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.519 #16 NEW cov: 12200 ft: 13829 corp: 5/121b lim: 40 exec/s: 0 rss: 71Mb L: 9/38 MS: 2 PersAutoDict-InsertRepeatedBytes- DE: "\011\000\000\000"- 00:07:50.519 [2024-07-25 09:23:03.235944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.235966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.519 #20 NEW cov: 12200 ft: 13983 corp: 6/130b lim: 40 exec/s: 0 rss: 71Mb L: 9/38 MS: 4 ChangeByte-CopyPart-PersAutoDict-CopyPart- DE: "\011\000\000\000"- 00:07:50.519 [2024-07-25 09:23:03.276067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.276093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.519 #21 NEW cov: 12200 ft: 14038 corp: 7/139b lim: 40 exec/s: 0 rss: 72Mb L: 9/38 MS: 1 ChangeByte- 00:07:50.519 [2024-07-25 09:23:03.326276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.519 [2024-07-25 09:23:03.326300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.776 #27 NEW cov: 12200 ft: 14138 corp: 8/148b lim: 40 exec/s: 0 rss: 72Mb L: 9/38 MS: 1 CopyPart- 00:07:50.776 [2024-07-25 09:23:03.366370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.366394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.776 #28 NEW cov: 12200 ft: 14198 corp: 9/159b lim: 40 exec/s: 0 rss: 72Mb L: 11/38 MS: 1 CopyPart- 00:07:50.776 [2024-07-25 09:23:03.416488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d0000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.416511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.776 #29 NEW cov: 12200 ft: 14255 corp: 10/168b lim: 40 exec/s: 0 rss: 72Mb L: 9/38 MS: 1 ChangeBit- 00:07:50.776 [2024-07-25 09:23:03.467028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.467051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.467106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff090000 cdw11:0032ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.467122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.467180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.467191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.467248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:e0e0e0ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.467259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.776 #30 NEW cov: 12200 ft: 14334 corp: 11/206b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 ChangeByte- 00:07:50.776 [2024-07-25 09:23:03.517321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.517344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.517401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.517413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.517468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.517478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.517535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffe0e0 cdw11:e0ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.517546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.517599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff0fcb0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.517610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.776 #31 NEW cov: 12200 ft: 14467 corp: 12/246b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:07:50.776 [2024-07-25 09:23:03.557285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d0000 cdw11:00ffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.557308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.557367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.557379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.557435] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.557446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.776 [2024-07-25 09:23:03.557502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.776 [2024-07-25 09:23:03.557516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.035 #32 NEW cov: 12200 ft: 14493 corp: 13/283b lim: 40 exec/s: 0 rss: 72Mb L: 37/40 MS: 1 InsertRepeatedBytes- 00:07:51.035 [2024-07-25 09:23:03.606998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00000030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.035 [2024-07-25 09:23:03.607021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.035 #33 NEW cov: 12200 ft: 14579 corp: 14/292b lim: 40 exec/s: 0 rss: 72Mb L: 9/40 MS: 1 CrossOver- 00:07:51.035 [2024-07-25 09:23:03.647087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09010000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.035 [2024-07-25 09:23:03.647110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.035 #34 NEW cov: 12200 ft: 14627 corp: 15/301b lim: 40 exec/s: 0 rss: 72Mb L: 9/40 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:51.035 [2024-07-25 09:23:03.687229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d0000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.035 [2024-07-25 09:23:03.687251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.035 #35 NEW cov: 12200 ft: 14668 corp: 16/310b lim: 40 exec/s: 0 rss: 72Mb L: 9/40 MS: 1 ShuffleBytes- 00:07:51.035 [2024-07-25 09:23:03.727318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0909007e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.035 [2024-07-25 09:23:03.727340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.035 #36 NEW cov: 12200 ft: 14692 corp: 17/322b lim: 40 exec/s: 0 rss: 72Mb L: 12/40 MS: 1 InsertByte- 00:07:51.035 [2024-07-25 09:23:03.777525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.035 [2024-07-25 09:23:03.777547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.035 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:51.035 #37 NEW cov: 12223 ft: 14720 corp: 18/334b lim: 40 exec/s: 0 rss: 72Mb L: 12/40 MS: 1 InsertByte- 00:07:51.035 [2024-07-25 09:23:03.817615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 
nsid:0 cdw10:09090000 cdw11:00fbfff6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.035 [2024-07-25 09:23:03.817637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.035 #38 NEW cov: 12223 ft: 14745 corp: 19/345b lim: 40 exec/s: 0 rss: 72Mb L: 11/40 MS: 1 ChangeBinInt- 00:07:51.293 [2024-07-25 09:23:03.857725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00090000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:03.857747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.293 #39 NEW cov: 12223 ft: 14765 corp: 20/356b lim: 40 exec/s: 0 rss: 72Mb L: 11/40 MS: 1 PersAutoDict- DE: "\011\000\000\000"- 00:07:51.293 [2024-07-25 09:23:03.898304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff40ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:03.898326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.293 [2024-07-25 09:23:03.898384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:03.898398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.293 [2024-07-25 09:23:03.898457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:03.898468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.293 [2024-07-25 09:23:03.898524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:03.898535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.293 #40 NEW cov: 12223 ft: 14795 corp: 21/391b lim: 40 exec/s: 40 rss: 72Mb L: 35/40 MS: 1 ChangeByte- 00:07:51.293 [2024-07-25 09:23:03.938124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:03.938146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.293 [2024-07-25 09:23:03.938220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:03.938233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.293 #41 NEW cov: 12223 ft: 15022 corp: 22/410b lim: 40 exec/s: 41 rss: 72Mb L: 19/40 MS: 1 InsertRepeatedBytes- 00:07:51.293 [2024-07-25 09:23:03.978059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d0009 cdw11:000000ff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:03.978086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.293 #42 NEW cov: 12223 ft: 15104 corp: 23/424b lim: 40 exec/s: 42 rss: 72Mb L: 14/40 MS: 1 CrossOver- 00:07:51.293 [2024-07-25 09:23:04.028233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090200 cdw11:00000030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:04.028255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.293 #43 NEW cov: 12223 ft: 15114 corp: 24/433b lim: 40 exec/s: 43 rss: 72Mb L: 9/40 MS: 1 ChangeBit- 00:07:51.293 [2024-07-25 09:23:04.078958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:04.078980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.293 [2024-07-25 09:23:04.079038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:04.079049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.293 [2024-07-25 09:23:04.079112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:04.079124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.293 [2024-07-25 09:23:04.079180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffe0e0 cdw11:e0ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.293 [2024-07-25 09:23:04.079191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.294 [2024-07-25 09:23:04.079249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ff280000 cdw11:000fcb0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.294 [2024-07-25 09:23:04.079260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.552 #44 NEW cov: 12223 ft: 15137 corp: 25/473b lim: 40 exec/s: 44 rss: 72Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:51.552 [2024-07-25 09:23:04.128498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09110000 cdw11:00000030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.552 [2024-07-25 09:23:04.128520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.552 #45 NEW cov: 12223 ft: 15142 corp: 26/482b lim: 40 exec/s: 45 rss: 72Mb L: 9/40 MS: 1 ChangeBinInt- 00:07:51.552 [2024-07-25 09:23:04.168607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.552 [2024-07-25 09:23:04.168629] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.552 #46 NEW cov: 12223 ft: 15155 corp: 27/493b lim: 40 exec/s: 46 rss: 72Mb L: 11/40 MS: 1 ChangeBinInt- 00:07:51.552 [2024-07-25 09:23:04.208733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d0009 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.552 [2024-07-25 09:23:04.208756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.552 #47 NEW cov: 12223 ft: 15180 corp: 28/507b lim: 40 exec/s: 47 rss: 72Mb L: 14/40 MS: 1 CopyPart- 00:07:51.552 [2024-07-25 09:23:04.258897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09095b00 cdw11:00090000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.552 [2024-07-25 09:23:04.258919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.552 #48 NEW cov: 12223 ft: 15181 corp: 29/518b lim: 40 exec/s: 48 rss: 73Mb L: 11/40 MS: 1 ChangeByte- 00:07:51.552 [2024-07-25 09:23:04.309000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d0000 cdw11:09000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.552 [2024-07-25 09:23:04.309023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.552 #49 NEW cov: 12223 ft: 15216 corp: 30/532b lim: 40 exec/s: 49 rss: 73Mb L: 14/40 MS: 1 CrossOver- 00:07:51.552 [2024-07-25 09:23:04.359535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d00c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.552 [2024-07-25 09:23:04.359558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.552 [2024-07-25 09:23:04.359615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.552 [2024-07-25 09:23:04.359627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.552 [2024-07-25 09:23:04.359685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c80000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.552 [2024-07-25 09:23:04.359697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.810 #50 NEW cov: 12223 ft: 15404 corp: 31/560b lim: 40 exec/s: 50 rss: 73Mb L: 28/40 MS: 1 InsertRepeatedBytes- 00:07:51.810 [2024-07-25 09:23:04.399290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.810 [2024-07-25 09:23:04.399317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.810 #53 NEW cov: 12223 ft: 15429 corp: 32/570b lim: 40 exec/s: 53 rss: 73Mb L: 10/40 MS: 3 EraseBytes-EraseBytes-CopyPart- 00:07:51.810 [2024-07-25 09:23:04.439403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09010000 cdw11:00ffff7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.810 [2024-07-25 09:23:04.439425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.810 #54 NEW cov: 12223 ft: 15431 corp: 33/579b lim: 40 exec/s: 54 rss: 73Mb L: 9/40 MS: 1 ChangeBit- 00:07:51.810 [2024-07-25 09:23:04.489538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d0000 cdw11:09000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.810 [2024-07-25 09:23:04.489559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.810 #55 NEW cov: 12223 ft: 15437 corp: 34/593b lim: 40 exec/s: 55 rss: 73Mb L: 14/40 MS: 1 ChangeBinInt- 00:07:51.810 [2024-07-25 09:23:04.539679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00001800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.810 [2024-07-25 09:23:04.539701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.810 [2024-07-25 09:23:04.579779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00001800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.810 [2024-07-25 09:23:04.579801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.810 #57 NEW cov: 12223 ft: 15442 corp: 35/605b lim: 40 exec/s: 57 rss: 73Mb L: 12/40 MS: 2 InsertByte-ChangeBit- 00:07:52.069 [2024-07-25 09:23:04.619857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090009 cdw11:09000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.619879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.069 #58 NEW cov: 12223 ft: 15492 corp: 36/617b lim: 40 exec/s: 58 rss: 73Mb L: 12/40 MS: 1 CrossOver- 00:07:52.069 [2024-07-25 09:23:04.670407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.670431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.069 [2024-07-25 09:23:04.670492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff090000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.670503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.069 [2024-07-25 09:23:04.670559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.670570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.069 [2024-07-25 09:23:04.670628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:e0e0e0ff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.670640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.069 #59 NEW cov: 12223 ft: 15509 corp: 37/656b lim: 40 exec/s: 59 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:07:52.069 [2024-07-25 09:23:04.710402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090dc800 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.710428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.069 [2024-07-25 09:23:04.710488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.710501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.069 [2024-07-25 09:23:04.710561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c80000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.710573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.069 #60 NEW cov: 12223 ft: 15526 corp: 38/684b lim: 40 exec/s: 60 rss: 74Mb L: 28/40 MS: 1 ShuffleBytes- 00:07:52.069 [2024-07-25 09:23:04.760547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:090d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.760569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.069 [2024-07-25 09:23:04.760646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.760658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.069 [2024-07-25 09:23:04.760715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.760726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.069 #61 NEW cov: 12223 ft: 15529 corp: 39/708b lim: 40 exec/s: 61 rss: 74Mb L: 24/40 MS: 1 EraseBytes- 00:07:52.069 [2024-07-25 09:23:04.800394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090009 cdw11:00000030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.800417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.069 #62 NEW cov: 12223 ft: 15555 corp: 40/717b lim: 40 exec/s: 62 rss: 74Mb L: 9/40 MS: 1 CopyPart- 00:07:52.069 [2024-07-25 09:23:04.840465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090009 cdw11:30000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.069 [2024-07-25 09:23:04.840488] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.328 #63 NEW cov: 12223 ft: 15567 corp: 41/726b lim: 40 exec/s: 63 rss: 74Mb L: 9/40 MS: 1 ShuffleBytes- 00:07:52.328 [2024-07-25 09:23:04.890637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:09090000 cdw11:00000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.328 [2024-07-25 09:23:04.890660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.328 #64 pulse cov: 12223 ft: 15574 corp: 41/726b lim: 40 exec/s: 32 rss: 74Mb 00:07:52.328 #64 NEW cov: 12223 ft: 15574 corp: 42/736b lim: 40 exec/s: 32 rss: 74Mb L: 10/40 MS: 1 EraseBytes- 00:07:52.328 #64 DONE cov: 12223 ft: 15574 corp: 42/736b lim: 40 exec/s: 32 rss: 74Mb 00:07:52.328 ###### Recommended dictionary. ###### 00:07:52.328 "\011\000\000\000" # Uses: 3 00:07:52.328 "\001\000\000\000" # Uses: 0 00:07:52.328 ###### End of recommended dictionary. ###### 00:07:52.328 Done 64 runs in 2 second(s) 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:52.328 09:23:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 
trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:52.328 [2024-07-25 09:23:05.056767] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:52.328 [2024-07-25 09:23:05.056844] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid421402 ] 00:07:52.328 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.587 [2024-07-25 09:23:05.222406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.587 [2024-07-25 09:23:05.286227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.587 [2024-07-25 09:23:05.344610] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.587 [2024-07-25 09:23:05.360831] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:52.587 INFO: Running with entropic power schedule (0xFF, 100). 00:07:52.587 INFO: Seed: 3538262727 00:07:52.587 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:52.587 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:52.587 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:52.587 INFO: A corpus is not provided, starting from an empty corpus 00:07:52.587 #2 INITED exec/s: 0 rss: 64Mb 00:07:52.587 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:52.587 This may also happen if the target rejected all inputs we tried so far 00:07:52.845 [2024-07-25 09:23:05.405669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.845 [2024-07-25 09:23:05.405702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.845 [2024-07-25 09:23:05.405748] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.845 [2024-07-25 09:23:05.405770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.845 [2024-07-25 09:23:05.405797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.845 [2024-07-25 09:23:05.405810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.845 [2024-07-25 09:23:05.405837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.845 [2024-07-25 09:23:05.405850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.845 NEW_FUNC[1/701]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:52.845 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:52.845 #14 NEW cov: 
11990 ft: 11988 corp: 2/34b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:52.845 NEW_FUNC[1/2]: 0x4b28e0 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:52.845 NEW_FUNC[2/2]: 0x11f0270 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1603 00:07:52.845 #18 NEW cov: 12160 ft: 13392 corp: 3/43b lim: 35 exec/s: 0 rss: 71Mb L: 9/33 MS: 4 ShuffleBytes-CopyPart-ChangeByte-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:53.103 #19 NEW cov: 12166 ft: 13701 corp: 4/52b lim: 35 exec/s: 0 rss: 71Mb L: 9/33 MS: 1 CrossOver- 00:07:53.103 [2024-07-25 09:23:05.726425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.103 [2024-07-25 09:23:05.726464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.103 [2024-07-25 09:23:05.726496] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.103 [2024-07-25 09:23:05.726511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.103 [2024-07-25 09:23:05.726544] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.103 [2024-07-25 09:23:05.726558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.103 [2024-07-25 09:23:05.726585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.103 [2024-07-25 09:23:05.726598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.103 #20 NEW cov: 12251 ft: 13994 corp: 5/85b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 ChangeBinInt- 00:07:53.103 #21 NEW cov: 12251 ft: 14029 corp: 6/94b lim: 35 exec/s: 0 rss: 72Mb L: 9/33 MS: 1 ChangeBit- 00:07:53.103 [2024-07-25 09:23:05.886845] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.103 [2024-07-25 09:23:05.886877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.103 [2024-07-25 09:23:05.886908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.103 [2024-07-25 09:23:05.886922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.103 [2024-07-25 09:23:05.886949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.103 [2024-07-25 09:23:05.886966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.103 [2024-07-25 09:23:05.886993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000089 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:53.103 [2024-07-25 09:23:05.887006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.362 #22 NEW cov: 12251 ft: 14114 corp: 7/127b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeBinInt- 00:07:53.362 [2024-07-25 09:23:05.947029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:05.947059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.362 [2024-07-25 09:23:05.947112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:05.947128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.362 [2024-07-25 09:23:05.947155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:05.947169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.362 [2024-07-25 09:23:05.947195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:05.947209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.362 [2024-07-25 09:23:05.947235] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:05.947248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.362 #23 NEW cov: 12251 ft: 14253 corp: 8/162b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:07:53.362 [2024-07-25 09:23:06.027307] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:06.027338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.362 [2024-07-25 09:23:06.027370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:06.027384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.362 [2024-07-25 09:23:06.027411] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:06.027425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.362 [2024-07-25 09:23:06.027452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:06.027466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID 
NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.362 [2024-07-25 09:23:06.027493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:06.027507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.362 #24 NEW cov: 12251 ft: 14276 corp: 9/197b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CMP- DE: "\001\037"- 00:07:53.362 #25 NEW cov: 12251 ft: 14292 corp: 10/207b lim: 35 exec/s: 0 rss: 72Mb L: 10/35 MS: 1 InsertByte- 00:07:53.362 [2024-07-25 09:23:06.147393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.362 [2024-07-25 09:23:06.147421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.622 #26 NEW cov: 12258 ft: 14574 corp: 11/224b lim: 35 exec/s: 0 rss: 72Mb L: 17/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:53.622 [2024-07-25 09:23:06.188361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.622 [2024-07-25 09:23:06.188388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.622 #27 NEW cov: 12258 ft: 14677 corp: 12/242b lim: 35 exec/s: 0 rss: 72Mb L: 18/35 MS: 1 InsertByte- 00:07:53.622 [2024-07-25 09:23:06.238487] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.622 [2024-07-25 09:23:06.238512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.622 #28 NEW cov: 12258 ft: 14695 corp: 13/260b lim: 35 exec/s: 0 rss: 72Mb L: 18/35 MS: 1 ChangeByte- 00:07:53.622 [2024-07-25 09:23:06.288644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.622 [2024-07-25 09:23:06.288667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.622 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:53.622 #29 NEW cov: 12281 ft: 14733 corp: 14/279b lim: 35 exec/s: 0 rss: 72Mb L: 19/35 MS: 1 CrossOver- 00:07:53.622 [2024-07-25 09:23:06.348937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.622 [2024-07-25 09:23:06.348960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.622 [2024-07-25 09:23:06.349016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.623 [2024-07-25 09:23:06.349027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.623 #30 NEW cov: 12281 ft: 14981 corp: 15/305b lim: 35 exec/s: 0 rss: 72Mb L: 26/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:53.623 [2024-07-25 09:23:06.388684] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.623 [2024-07-25 09:23:06.388706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.623 #31 NEW cov: 12281 ft: 15043 corp: 16/315b lim: 35 exec/s: 31 rss: 72Mb L: 10/35 MS: 1 InsertByte- 00:07:53.623 [2024-07-25 09:23:06.429313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.623 [2024-07-25 09:23:06.429337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.623 [2024-07-25 09:23:06.429397] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.623 [2024-07-25 09:23:06.429409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.623 [2024-07-25 09:23:06.429467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.623 [2024-07-25 09:23:06.429479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.623 [2024-07-25 09:23:06.429537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.623 [2024-07-25 09:23:06.429551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.883 #32 NEW cov: 12281 ft: 15133 corp: 17/343b lim: 35 exec/s: 32 rss: 72Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:07:53.883 [2024-07-25 09:23:06.479153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.883 [2024-07-25 09:23:06.479176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.883 #33 NEW cov: 12281 ft: 15142 corp: 18/360b lim: 35 exec/s: 33 rss: 72Mb L: 17/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:53.883 [2024-07-25 09:23:06.519456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.883 [2024-07-25 09:23:06.519479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.883 [2024-07-25 09:23:06.519534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.883 [2024-07-25 09:23:06.519545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.883 #34 NEW cov: 12281 ft: 15170 corp: 19/383b lim: 35 exec/s: 34 rss: 72Mb L: 23/35 MS: 1 CrossOver- 00:07:53.883 [2024-07-25 09:23:06.569408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.883 [2024-07-25 09:23:06.569432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.883 #35 NEW cov: 12281 ft: 15186 corp: 20/401b lim: 35 exec/s: 35 rss: 72Mb L: 18/35 MS: 1 PersAutoDict- DE: "\001\037"- 00:07:53.883 #37 NEW cov: 12281 ft: 15193 corp: 21/410b lim: 35 exec/s: 37 rss: 72Mb L: 9/35 MS: 2 EraseBytes-CMP- DE: "\001\000\000\000"- 00:07:53.883 [2024-07-25 09:23:06.659823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.883 [2024-07-25 09:23:06.659846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.883 [2024-07-25 09:23:06.659902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.883 [2024-07-25 09:23:06.659915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.883 [2024-07-25 09:23:06.659969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.883 [2024-07-25 09:23:06.659982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.142 #38 NEW cov: 12281 ft: 15328 corp: 22/437b lim: 35 exec/s: 38 rss: 72Mb L: 27/35 MS: 1 EraseBytes- 00:07:54.142 [2024-07-25 09:23:06.710129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.710152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.710209] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.710221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.710276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.710291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.710346] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.710357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.142 #39 NEW cov: 12281 ft: 15352 corp: 23/471b lim: 35 exec/s: 39 rss: 72Mb L: 34/35 MS: 1 CrossOver- 00:07:54.142 [2024-07-25 09:23:06.760140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.760162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.760217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.760230] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.142 #40 NEW cov: 12281 ft: 15398 corp: 24/495b lim: 35 exec/s: 40 rss: 72Mb L: 24/35 MS: 1 CrossOver- 00:07:54.142 [2024-07-25 09:23:06.800497] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.800521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.800577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.800589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.800644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.800655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.800710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.800723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.800779] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.800792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.142 #41 NEW cov: 12281 ft: 15419 corp: 25/530b lim: 35 exec/s: 41 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:54.142 [2024-07-25 09:23:06.840333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.840355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.840412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.840423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.142 #42 NEW cov: 12281 ft: 15433 corp: 26/556b lim: 35 exec/s: 42 rss: 72Mb L: 26/35 MS: 1 ChangeBinInt- 00:07:54.142 [2024-07-25 09:23:06.890482] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.890506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.890562] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.890576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT 
SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.890632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.890645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.142 #43 NEW cov: 12281 ft: 15449 corp: 27/583b lim: 35 exec/s: 43 rss: 72Mb L: 27/35 MS: 1 ShuffleBytes- 00:07:54.142 [2024-07-25 09:23:06.940893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.940917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.940971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.940983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.941036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.941048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.941103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.941116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.142 [2024-07-25 09:23:06.941171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.142 [2024-07-25 09:23:06.941183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.401 #44 NEW cov: 12281 ft: 15495 corp: 28/618b lim: 35 exec/s: 44 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:54.401 [2024-07-25 09:23:06.990787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:06.990810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:06.990862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:06.990873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.401 #45 NEW cov: 12281 ft: 15527 corp: 29/641b lim: 35 exec/s: 45 rss: 72Mb L: 23/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:54.401 [2024-07-25 09:23:07.041195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.041219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE 
(01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.041275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.041287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.041341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.041356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.041410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.041424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.041478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.041491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.401 #46 NEW cov: 12281 ft: 15591 corp: 30/676b lim: 35 exec/s: 46 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:07:54.401 [2024-07-25 09:23:07.090996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.091019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.091079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.091090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.091162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.091174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.401 #52 NEW cov: 12281 ft: 15595 corp: 31/702b lim: 35 exec/s: 52 rss: 73Mb L: 26/35 MS: 1 ShuffleBytes- 00:07:54.401 [2024-07-25 09:23:07.141250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.141275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.141333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.141346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.141398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET 
FEATURES RESERVED cid:6 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.141410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.401 [2024-07-25 09:23:07.141464] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.401 [2024-07-25 09:23:07.141477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.401 #53 NEW cov: 12281 ft: 15608 corp: 32/735b lim: 35 exec/s: 53 rss: 73Mb L: 33/35 MS: 1 ChangeBit- 00:07:54.660 #54 NEW cov: 12281 ft: 15640 corp: 33/745b lim: 35 exec/s: 54 rss: 73Mb L: 10/35 MS: 1 ChangeByte- 00:07:54.660 [2024-07-25 09:23:07.231593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.231616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.660 [2024-07-25 09:23:07.231671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.231684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.660 [2024-07-25 09:23:07.231738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.231748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.660 #55 NEW cov: 12281 ft: 15643 corp: 34/774b lim: 35 exec/s: 55 rss: 73Mb L: 29/35 MS: 1 CopyPart- 00:07:54.660 [2024-07-25 09:23:07.271676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.271699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.660 [2024-07-25 09:23:07.271756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.271770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.660 [2024-07-25 09:23:07.271826] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.271839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.660 #58 NEW cov: 12281 ft: 15680 corp: 35/804b lim: 35 exec/s: 58 rss: 73Mb L: 30/35 MS: 3 CrossOver-ChangeByte-CrossOver- 00:07:54.660 [2024-07-25 09:23:07.321519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.321543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.660 #59 NEW cov: 12281 ft: 15687 corp: 
36/821b lim: 35 exec/s: 59 rss: 73Mb L: 17/35 MS: 1 ChangeBit- 00:07:54.660 [2024-07-25 09:23:07.361729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.361754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.660 [2024-07-25 09:23:07.361811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.660 [2024-07-25 09:23:07.361824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.660 #60 NEW cov: 12281 ft: 15690 corp: 37/847b lim: 35 exec/s: 30 rss: 73Mb L: 26/35 MS: 1 InsertRepeatedBytes- 00:07:54.660 #60 DONE cov: 12281 ft: 15690 corp: 37/847b lim: 35 exec/s: 30 rss: 73Mb 00:07:54.660 ###### Recommended dictionary. ###### 00:07:54.660 "\001\000\000\000\000\000\000\000" # Uses: 3 00:07:54.660 "\001\037" # Uses: 1 00:07:54.660 "\001\000\000\000" # Uses: 0 00:07:54.660 ###### End of recommended dictionary. ###### 00:07:54.660 Done 60 runs in 2 second(s) 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:54.919 09:23:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:54.919 [2024-07-25 09:23:07.550668] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:54.919 [2024-07-25 09:23:07.550748] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid421755 ] 00:07:54.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.919 [2024-07-25 09:23:07.722719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.178 [2024-07-25 09:23:07.787936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.178 [2024-07-25 09:23:07.846407] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.178 [2024-07-25 09:23:07.862617] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:55.178 INFO: Running with entropic power schedule (0xFF, 100). 00:07:55.178 INFO: Seed: 1743292123 00:07:55.178 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:55.178 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:55.178 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:55.178 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.178 #2 INITED exec/s: 0 rss: 63Mb 00:07:55.178 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:55.178 This may also happen if the target rejected all inputs we tried so far 00:07:55.178 [2024-07-25 09:23:07.910960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.178 [2024-07-25 09:23:07.910986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.438 NEW_FUNC[1/703]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:55.438 NEW_FUNC[2/703]: 0x4b4bb0 in feat_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:295 00:07:55.438 #4 NEW cov: 12033 ft: 12032 corp: 2/25b lim: 35 exec/s: 0 rss: 71Mb L: 24/24 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:55.438 [2024-07-25 09:23:08.061302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.438 [2024-07-25 09:23:08.061334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.438 #5 NEW cov: 12146 ft: 12501 corp: 3/49b lim: 35 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 CopyPart- 00:07:55.438 [2024-07-25 09:23:08.121215] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:55.438 [2024-07-25 09:23:08.121462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.438 [2024-07-25 09:23:08.121486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.438 [2024-07-25 09:23:08.121605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:6 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.438 [2024-07-25 09:23:08.121618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.438 #6 NEW cov: 12163 ft: 12967 corp: 4/75b lim: 35 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 CopyPart- 00:07:55.438 [2024-07-25 09:23:08.161502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.438 [2024-07-25 09:23:08.161525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.438 [2024-07-25 09:23:08.161643] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.438 [2024-07-25 09:23:08.161657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.438 #7 NEW cov: 12248 ft: 13306 corp: 5/99b lim: 35 exec/s: 0 rss: 71Mb L: 24/26 MS: 1 ChangeBit- 00:07:55.438 [2024-07-25 09:23:08.201357] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 13 00:07:55.438 [2024-07-25 09:23:08.201822] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.438 [2024-07-25 09:23:08.201844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.438 
[2024-07-25 09:23:08.201898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000304 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.438 [2024-07-25 09:23:08.201910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.438 #8 NEW cov: 12248 ft: 13702 corp: 6/128b lim: 35 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:55.696 [2024-07-25 09:23:08.251534] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:55.696 [2024-07-25 09:23:08.251768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.696 [2024-07-25 09:23:08.251791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.696 [2024-07-25 09:23:08.251912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:6 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.696 [2024-07-25 09:23:08.251924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.697 #9 NEW cov: 12248 ft: 13800 corp: 7/154b lim: 35 exec/s: 0 rss: 72Mb L: 26/29 MS: 1 CrossOver- 00:07:55.697 [2024-07-25 09:23:08.291847] ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 2 00:07:55.697 [2024-07-25 09:23:08.292089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.292112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.697 [2024-07-25 09:23:08.292298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:7 cdw10:00000104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.292310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.697 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:55.697 #10 NEW cov: 12264 ft: 14011 corp: 8/186b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:07:55.697 [2024-07-25 09:23:08.341829] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:55.697 [2024-07-25 09:23:08.342188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.342211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.697 [2024-07-25 09:23:08.342331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:6 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.342344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.697 [2024-07-25 09:23:08.342402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.342413] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.697 #11 NEW cov: 12264 ft: 14148 corp: 9/217b lim: 35 exec/s: 0 rss: 72Mb L: 31/32 MS: 1 InsertRepeatedBytes- 00:07:55.697 [2024-07-25 09:23:08.382027] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:55.697 [2024-07-25 09:23:08.382284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.382307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.697 [2024-07-25 09:23:08.382483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:7 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.382496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.697 #12 NEW cov: 12264 ft: 14191 corp: 10/250b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CopyPart- 00:07:55.697 [2024-07-25 09:23:08.422154] ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 2 00:07:55.697 [2024-07-25 09:23:08.422389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.422411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.697 [2024-07-25 09:23:08.422470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000084 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.422482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.697 [2024-07-25 09:23:08.422601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:7 cdw10:00000104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.422614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.697 #13 NEW cov: 12264 ft: 14241 corp: 11/282b lim: 35 exec/s: 0 rss: 72Mb L: 32/33 MS: 1 ChangeBit- 00:07:55.697 [2024-07-25 09:23:08.472383] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.472407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.697 [2024-07-25 09:23:08.472525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000025 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.697 [2024-07-25 09:23:08.472537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.955 #14 NEW cov: 12264 ft: 14260 corp: 12/306b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 ChangeBit- 00:07:55.955 [2024-07-25 09:23:08.522209] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 13 00:07:55.955 [2024-07-25 09:23:08.522668] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.955 [2024-07-25 
09:23:08.522691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.955 [2024-07-25 09:23:08.522747] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000304 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.955 [2024-07-25 09:23:08.522759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.955 #15 NEW cov: 12264 ft: 14276 corp: 13/335b lim: 35 exec/s: 0 rss: 72Mb L: 29/33 MS: 1 ShuffleBytes- 00:07:55.956 [2024-07-25 09:23:08.572555] ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 2 00:07:55.956 [2024-07-25 09:23:08.572793] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.956 [2024-07-25 09:23:08.572817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.956 [2024-07-25 09:23:08.572999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:7 cdw10:00000104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.956 [2024-07-25 09:23:08.573011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.956 #16 NEW cov: 12264 ft: 14294 corp: 14/367b lim: 35 exec/s: 0 rss: 72Mb L: 32/33 MS: 1 ShuffleBytes- 00:07:55.956 [2024-07-25 09:23:08.622768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000036 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.956 [2024-07-25 09:23:08.622790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.956 #17 NEW cov: 12264 ft: 14326 corp: 15/391b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 ChangeBit- 00:07:55.956 [2024-07-25 09:23:08.672894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.956 [2024-07-25 09:23:08.672917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.956 #18 NEW cov: 12264 ft: 14410 corp: 16/415b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:55.956 [2024-07-25 09:23:08.712828] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:55.956 [2024-07-25 09:23:08.713194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.956 [2024-07-25 09:23:08.713217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.956 [2024-07-25 09:23:08.713339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:6 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.956 [2024-07-25 09:23:08.713352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.956 [2024-07-25 09:23:08.713412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.956 [2024-07-25 09:23:08.713425] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.956 #19 NEW cov: 12264 ft: 14425 corp: 17/446b lim: 35 exec/s: 0 rss: 72Mb L: 31/33 MS: 1 ChangeBit- 00:07:55.956 [2024-07-25 09:23:08.763166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.956 [2024-07-25 09:23:08.763190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.214 #20 NEW cov: 12264 ft: 14437 corp: 18/471b lim: 35 exec/s: 0 rss: 72Mb L: 25/33 MS: 1 CrossOver- 00:07:56.214 [2024-07-25 09:23:08.803291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:08.803314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.214 [2024-07-25 09:23:08.803431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:08.803443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.214 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:56.214 #21 NEW cov: 12287 ft: 14475 corp: 19/497b lim: 35 exec/s: 0 rss: 72Mb L: 26/33 MS: 1 CMP- DE: "@\000\000\000\000\000\000\000"- 00:07:56.214 [2024-07-25 09:23:08.843100] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 13 00:07:56.214 [2024-07-25 09:23:08.843579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:08.843604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.214 [2024-07-25 09:23:08.843663] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000304 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:08.843675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.214 #22 NEW cov: 12287 ft: 14485 corp: 20/526b lim: 35 exec/s: 0 rss: 72Mb L: 29/33 MS: 1 CMP- DE: "\036\000\000\000"- 00:07:56.214 [2024-07-25 09:23:08.883515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:08.883539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.214 #28 NEW cov: 12287 ft: 14527 corp: 21/552b lim: 35 exec/s: 28 rss: 72Mb L: 26/33 MS: 1 InsertByte- 00:07:56.214 [2024-07-25 09:23:08.933788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:08.933812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.214 #29 NEW cov: 12287 ft: 14692 corp: 22/584b lim: 35 exec/s: 29 rss: 72Mb L: 32/33 MS: 1 ChangeBinInt- 00:07:56.214 [2024-07-25 09:23:08.973718] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:08.973741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.214 #30 NEW cov: 12287 ft: 14709 corp: 23/611b lim: 35 exec/s: 30 rss: 72Mb L: 27/33 MS: 1 InsertRepeatedBytes- 00:07:56.214 [2024-07-25 09:23:09.013551] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 13 00:07:56.214 [2024-07-25 09:23:09.013902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:09.013926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.214 [2024-07-25 09:23:09.013984] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000304 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.214 [2024-07-25 09:23:09.013995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.472 #31 NEW cov: 12287 ft: 14769 corp: 24/637b lim: 35 exec/s: 31 rss: 73Mb L: 26/33 MS: 1 EraseBytes- 00:07:56.472 [2024-07-25 09:23:09.064022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.064045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.472 [2024-07-25 09:23:09.064108] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000de SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.064120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.472 #32 NEW cov: 12287 ft: 14788 corp: 25/662b lim: 35 exec/s: 32 rss: 73Mb L: 25/33 MS: 1 InsertByte- 00:07:56.472 [2024-07-25 09:23:09.114123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.114145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.472 [2024-07-25 09:23:09.114267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.114287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.472 #33 NEW cov: 12287 ft: 14793 corp: 26/686b lim: 35 exec/s: 33 rss: 73Mb L: 24/33 MS: 1 ChangeBit- 00:07:56.472 [2024-07-25 09:23:09.153921] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 13 00:07:56.472 [2024-07-25 09:23:09.154284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.154307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.472 [2024-07-25 09:23:09.154364] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE 
THRESHOLD cid:5 cdw10:00000304 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.154376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.472 [2024-07-25 09:23:09.154435] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.154447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.472 #34 NEW cov: 12287 ft: 14808 corp: 27/712b lim: 35 exec/s: 34 rss: 73Mb L: 26/33 MS: 1 ChangeByte- 00:07:56.472 NEW_FUNC[1/1]: 0x4b6a10 in feat_volatile_write_cache /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:312 00:07:56.472 #35 NEW cov: 12301 ft: 15257 corp: 28/744b lim: 35 exec/s: 35 rss: 73Mb L: 32/33 MS: 1 ChangeBit- 00:07:56.472 [2024-07-25 09:23:09.254656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.254678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.472 [2024-07-25 09:23:09.254863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.472 [2024-07-25 09:23:09.254875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.730 #36 NEW cov: 12301 ft: 15263 corp: 29/775b lim: 35 exec/s: 36 rss: 73Mb L: 31/33 MS: 1 CopyPart- 00:07:56.730 [2024-07-25 09:23:09.304824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.304847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.730 [2024-07-25 09:23:09.305030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.305042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.730 #37 NEW cov: 12301 ft: 15267 corp: 30/806b lim: 35 exec/s: 37 rss: 73Mb L: 31/33 MS: 1 CopyPart- 00:07:56.730 [2024-07-25 09:23:09.354969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.354992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.730 [2024-07-25 09:23:09.355175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.355189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.730 #38 NEW cov: 12301 ft: 15283 corp: 31/837b lim: 35 exec/s: 38 rss: 73Mb L: 31/33 MS: 1 CMP- DE: "\015\000"- 00:07:56.730 [2024-07-25 09:23:09.394721] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:56.730 [2024-07-25 09:23:09.394955] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.394978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.730 [2024-07-25 09:23:09.395098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:6 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.395112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.730 #39 NEW cov: 12301 ft: 15287 corp: 32/863b lim: 35 exec/s: 39 rss: 73Mb L: 26/33 MS: 1 ChangeBit- 00:07:56.730 [2024-07-25 09:23:09.434645] ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 2 00:07:56.730 [2024-07-25 09:23:09.435253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:4 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.435275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.730 #40 NEW cov: 12301 ft: 15363 corp: 33/895b lim: 35 exec/s: 40 rss: 73Mb L: 32/33 MS: 1 ShuffleBytes- 00:07:56.730 [2024-07-25 09:23:09.474912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.474936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.730 #44 NEW cov: 12301 ft: 15675 corp: 34/902b lim: 35 exec/s: 44 rss: 73Mb L: 7/33 MS: 4 CrossOver-ChangeByte-EraseBytes-CrossOver- 00:07:56.730 [2024-07-25 09:23:09.515185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.730 [2024-07-25 09:23:09.515206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.988 #45 NEW cov: 12301 ft: 15766 corp: 35/918b lim: 35 exec/s: 45 rss: 73Mb L: 16/33 MS: 1 EraseBytes- 00:07:56.988 [2024-07-25 09:23:09.565483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.988 [2024-07-25 09:23:09.565506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.988 #46 NEW cov: 12301 ft: 15806 corp: 36/945b lim: 35 exec/s: 46 rss: 74Mb L: 27/33 MS: 1 InsertByte- 00:07:56.988 [2024-07-25 09:23:09.615600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.988 [2024-07-25 09:23:09.615622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.988 [2024-07-25 09:23:09.615682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000de SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.988 [2024-07-25 09:23:09.615694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.988 #47 NEW cov: 12301 ft: 15816 corp: 37/970b lim: 35 exec/s: 47 rss: 74Mb L: 25/33 
MS: 1 ChangeByte- 00:07:56.988 [2024-07-25 09:23:09.665399] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 13 00:07:56.988 [2024-07-25 09:23:09.665634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.988 [2024-07-25 09:23:09.665657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.988 [2024-07-25 09:23:09.665715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000304 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.988 [2024-07-25 09:23:09.665727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.988 #48 NEW cov: 12301 ft: 15877 corp: 38/986b lim: 35 exec/s: 48 rss: 74Mb L: 16/33 MS: 1 CrossOver- 00:07:56.988 #49 NEW cov: 12301 ft: 15895 corp: 39/1018b lim: 35 exec/s: 49 rss: 74Mb L: 32/33 MS: 1 CopyPart- 00:07:56.988 [2024-07-25 09:23:09.765720] ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 13 00:07:56.988 [2024-07-25 09:23:09.766074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.988 [2024-07-25 09:23:09.766096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.988 [2024-07-25 09:23:09.766154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000304 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.988 [2024-07-25 09:23:09.766166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.988 #50 NEW cov: 12301 ft: 15902 corp: 40/1042b lim: 35 exec/s: 50 rss: 74Mb L: 24/33 MS: 1 EraseBytes- 00:07:57.246 [2024-07-25 09:23:09.806002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.246 [2024-07-25 09:23:09.806025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.246 #51 NEW cov: 12301 ft: 15968 corp: 41/1056b lim: 35 exec/s: 51 rss: 74Mb L: 14/33 MS: 1 EraseBytes- 00:07:57.246 [2024-07-25 09:23:09.846441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.246 [2024-07-25 09:23:09.846464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.246 [2024-07-25 09:23:09.846594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.246 [2024-07-25 09:23:09.846607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.246 [2024-07-25 09:23:09.846665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000025 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.246 [2024-07-25 09:23:09.846677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.246 #52 NEW cov: 12301 ft: 15986 corp: 42/1087b lim: 35 
exec/s: 52 rss: 74Mb L: 31/33 MS: 1 InsertRepeatedBytes- 00:07:57.246 [2024-07-25 09:23:09.886205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000326 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.246 [2024-07-25 09:23:09.886227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.246 #53 NEW cov: 12301 ft: 15997 corp: 43/1105b lim: 35 exec/s: 26 rss: 74Mb L: 18/33 MS: 1 EraseBytes- 00:07:57.246 #53 DONE cov: 12301 ft: 15997 corp: 43/1105b lim: 35 exec/s: 26 rss: 74Mb 00:07:57.246 ###### Recommended dictionary. ###### 00:07:57.246 "\001\000\000\000" # Uses: 0 00:07:57.246 "@\000\000\000\000\000\000\000" # Uses: 0 00:07:57.246 "\036\000\000\000" # Uses: 0 00:07:57.246 "\015\000" # Uses: 0 00:07:57.246 ###### End of recommended dictionary. ###### 00:07:57.246 Done 53 runs in 2 second(s) 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:57.246 09:23:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:57.506 [2024-07-25 09:23:10.057013] Starting SPDK 
v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:57.506 [2024-07-25 09:23:10.057074] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422080 ] 00:07:57.506 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.506 [2024-07-25 09:23:10.228781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.506 [2024-07-25 09:23:10.294045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.764 [2024-07-25 09:23:10.352942] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.764 [2024-07-25 09:23:10.369182] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:57.764 INFO: Running with entropic power schedule (0xFF, 100). 00:07:57.764 INFO: Seed: 4249288332 00:07:57.764 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:57.764 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:57.764 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:57.764 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.764 #2 INITED exec/s: 0 rss: 63Mb 00:07:57.764 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:57.764 This may also happen if the target rejected all inputs we tried so far 00:07:57.764 [2024-07-25 09:23:10.417368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.764 [2024-07-25 09:23:10.417397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.764 NEW_FUNC[1/701]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:57.764 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:57.764 #6 NEW cov: 12082 ft: 12081 corp: 2/30b lim: 105 exec/s: 0 rss: 71Mb L: 29/29 MS: 4 ChangeByte-ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:57.764 [2024-07-25 09:23:10.567907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.764 [2024-07-25 09:23:10.567940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.764 [2024-07-25 09:23:10.567988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695317 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.764 [2024-07-25 09:23:10.568003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.022 #12 NEW cov: 12195 ft: 12969 corp: 3/74b lim: 105 exec/s: 0 rss: 71Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:58.022 [2024-07-25 09:23:10.627982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.022 [2024-07-25 09:23:10.628007] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.022 [2024-07-25 09:23:10.628048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695317 len:22358 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.022 [2024-07-25 09:23:10.628083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.022 #13 NEW cov: 12201 ft: 13134 corp: 4/118b lim: 105 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 ChangeBit- 00:07:58.022 [2024-07-25 09:23:10.677997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.022 [2024-07-25 09:23:10.678024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.022 #14 NEW cov: 12286 ft: 13513 corp: 5/147b lim: 105 exec/s: 0 rss: 72Mb L: 29/44 MS: 1 ShuffleBytes- 00:07:58.022 [2024-07-25 09:23:10.718267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.022 [2024-07-25 09:23:10.718291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.022 [2024-07-25 09:23:10.718330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695317 len:22358 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.022 [2024-07-25 09:23:10.718344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.022 #15 NEW cov: 12286 ft: 13650 corp: 6/191b lim: 105 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 ChangeByte- 00:07:58.022 [2024-07-25 09:23:10.768275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.022 [2024-07-25 09:23:10.768300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.022 #16 NEW cov: 12286 ft: 13737 corp: 7/221b lim: 105 exec/s: 0 rss: 72Mb L: 30/44 MS: 1 InsertByte- 00:07:58.022 [2024-07-25 09:23:10.818579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.022 [2024-07-25 09:23:10.818606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.022 [2024-07-25 09:23:10.818644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693565767765 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.022 [2024-07-25 09:23:10.818658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.280 #22 NEW cov: 12286 ft: 13897 corp: 8/265b lim: 105 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 ChangeByte- 00:07:58.280 [2024-07-25 09:23:10.858642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57389 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:10.858666] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.280 [2024-07-25 09:23:10.858707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695317 len:22358 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:10.858721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.280 #23 NEW cov: 12286 ft: 13951 corp: 9/309b lim: 105 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 ChangeBinInt- 00:07:58.280 [2024-07-25 09:23:10.898614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:10.898639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.280 #24 NEW cov: 12286 ft: 13971 corp: 10/339b lim: 105 exec/s: 0 rss: 72Mb L: 30/44 MS: 1 ChangeBinInt- 00:07:58.280 [2024-07-25 09:23:10.948902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:10.948926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.280 [2024-07-25 09:23:10.948968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695456 len:21848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:10.948982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.280 #25 NEW cov: 12286 ft: 14065 corp: 11/384b lim: 105 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 InsertByte- 00:07:58.280 [2024-07-25 09:23:10.989062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498493 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:10.989091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.280 [2024-07-25 09:23:10.989144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16164920264849678560 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:10.989157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.280 #26 NEW cov: 12286 ft: 14136 corp: 12/430b lim: 105 exec/s: 0 rss: 72Mb L: 46/46 MS: 1 InsertByte- 00:07:58.280 [2024-07-25 09:23:11.039184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:11.039208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.280 [2024-07-25 09:23:11.039248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693565767765 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.280 [2024-07-25 09:23:11.039262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.280 #27 NEW cov: 12286 ft: 14153 corp: 13/475b lim: 105 exec/s: 0 rss: 72Mb L: 45/46 MS: 1 InsertByte- 00:07:58.538 [2024-07-25 09:23:11.089211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57582 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.089235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.538 #28 NEW cov: 12286 ft: 14248 corp: 14/505b lim: 105 exec/s: 0 rss: 72Mb L: 30/46 MS: 1 ChangeByte- 00:07:58.538 [2024-07-25 09:23:11.139348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57582 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.139375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.538 #29 NEW cov: 12286 ft: 14316 corp: 15/535b lim: 105 exec/s: 0 rss: 72Mb L: 30/46 MS: 1 ChangeBinInt- 00:07:58.538 [2024-07-25 09:23:11.189524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16164920261813002464 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.189549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.538 #30 NEW cov: 12286 ft: 14346 corp: 16/560b lim: 105 exec/s: 0 rss: 72Mb L: 25/46 MS: 1 EraseBytes- 00:07:58.538 [2024-07-25 09:23:11.239607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.239632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.538 #31 NEW cov: 12286 ft: 14348 corp: 17/593b lim: 105 exec/s: 0 rss: 72Mb L: 33/46 MS: 1 EraseBytes- 00:07:58.538 [2024-07-25 09:23:11.289885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.289909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.538 [2024-07-25 09:23:11.289950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693565767765 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.289964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.538 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:58.538 #32 NEW cov: 12309 ft: 14429 corp: 18/638b lim: 105 exec/s: 0 rss: 72Mb L: 45/46 MS: 1 ShuffleBytes- 00:07:58.538 [2024-07-25 09:23:11.330151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.330176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.538 [2024-07-25 09:23:11.330229] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695317 len:22358 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.330242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.538 [2024-07-25 09:23:11.330310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.538 [2024-07-25 09:23:11.330324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.796 #33 NEW cov: 12309 ft: 14745 corp: 19/709b lim: 105 exec/s: 0 rss: 72Mb L: 71/71 MS: 1 InsertRepeatedBytes- 00:07:58.796 [2024-07-25 09:23:11.380009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.796 [2024-07-25 09:23:11.380039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.796 #34 NEW cov: 12309 ft: 14763 corp: 20/736b lim: 105 exec/s: 34 rss: 72Mb L: 27/71 MS: 1 EraseBytes- 00:07:58.796 [2024-07-25 09:23:11.420137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.796 [2024-07-25 09:23:11.420163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.796 #35 NEW cov: 12309 ft: 14800 corp: 21/764b lim: 105 exec/s: 35 rss: 73Mb L: 28/71 MS: 1 InsertByte- 00:07:58.797 [2024-07-25 09:23:11.470294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204197948188319968 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.797 [2024-07-25 09:23:11.470319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.797 #36 NEW cov: 12309 ft: 14809 corp: 22/795b lim: 105 exec/s: 36 rss: 73Mb L: 31/71 MS: 1 InsertByte- 00:07:58.797 [2024-07-25 09:23:11.520516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.797 [2024-07-25 09:23:11.520542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.797 [2024-07-25 09:23:11.520580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693568550229 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.797 [2024-07-25 09:23:11.520595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.797 #37 NEW cov: 12309 ft: 14852 corp: 23/837b lim: 105 exec/s: 37 rss: 73Mb L: 42/71 MS: 1 EraseBytes- 00:07:58.797 [2024-07-25 09:23:11.560695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.797 [2024-07-25 09:23:11.560719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.797 [2024-07-25 09:23:11.560760] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6149068122682482773 len:57387 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.797 [2024-07-25 09:23:11.560775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.797 #38 NEW cov: 12309 ft: 14862 corp: 24/894b lim: 105 exec/s: 38 rss: 73Mb L: 57/71 MS: 1 CopyPart- 00:07:58.797 [2024-07-25 09:23:11.600858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.797 [2024-07-25 09:23:11.600884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.797 [2024-07-25 09:23:11.600925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693565767765 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.797 [2024-07-25 09:23:11.600939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.056 #39 NEW cov: 12309 ft: 14866 corp: 25/938b lim: 105 exec/s: 39 rss: 73Mb L: 44/71 MS: 1 ChangeByte- 00:07:59.056 [2024-07-25 09:23:11.641045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57389 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.641074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.056 [2024-07-25 09:23:11.641128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204019495333847083 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.641144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.056 [2024-07-25 09:23:11.641197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16204198715714494688 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.641211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.056 #40 NEW cov: 12309 ft: 14913 corp: 26/1007b lim: 105 exec/s: 40 rss: 73Mb L: 69/71 MS: 1 CrossOver- 00:07:59.056 [2024-07-25 09:23:11.681018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.681042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.056 [2024-07-25 09:23:11.681088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577648864 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.681102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.056 #41 NEW cov: 12309 ft: 14919 corp: 27/1053b lim: 105 exec/s: 41 rss: 73Mb L: 46/71 MS: 1 InsertByte- 00:07:59.056 [2024-07-25 09:23:11.720995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.721021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.056 #42 NEW cov: 12309 ft: 14950 corp: 28/1094b lim: 105 exec/s: 42 rss: 73Mb L: 41/71 MS: 1 EraseBytes- 00:07:59.056 [2024-07-25 09:23:11.771154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57582 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.771178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.056 #43 NEW cov: 12309 ft: 14990 corp: 29/1124b lim: 105 exec/s: 43 rss: 73Mb L: 30/71 MS: 1 ChangeBit- 00:07:59.056 [2024-07-25 09:23:11.811399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.811423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.056 [2024-07-25 09:23:11.811471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.811485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.056 #44 NEW cov: 12309 ft: 15004 corp: 30/1180b lim: 105 exec/s: 44 rss: 73Mb L: 56/71 MS: 1 CopyPart- 00:07:59.056 [2024-07-25 09:23:11.851614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.851639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.056 [2024-07-25 09:23:11.851697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174571 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.851710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.056 [2024-07-25 09:23:11.851767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16204198713388032224 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.056 [2024-07-25 09:23:11.851783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.315 #50 NEW cov: 12309 ft: 15027 corp: 31/1249b lim: 105 exec/s: 50 rss: 73Mb L: 69/71 MS: 1 CrossOver- 00:07:59.315 [2024-07-25 09:23:11.891499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:11.891525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.315 #51 NEW cov: 12309 ft: 15045 corp: 32/1276b lim: 105 exec/s: 51 rss: 73Mb L: 27/71 MS: 1 ShuffleBytes- 00:07:59.315 [2024-07-25 09:23:11.931606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 
len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:11.931631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.315 #52 NEW cov: 12309 ft: 15072 corp: 33/1300b lim: 105 exec/s: 52 rss: 73Mb L: 24/71 MS: 1 EraseBytes- 00:07:59.315 [2024-07-25 09:23:11.972053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:11.972081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.315 [2024-07-25 09:23:11.972141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695456 len:21848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:11.972157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.315 [2024-07-25 09:23:11.972210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:11.972224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.315 #53 NEW cov: 12309 ft: 15097 corp: 34/1367b lim: 105 exec/s: 53 rss: 73Mb L: 67/71 MS: 1 InsertRepeatedBytes- 00:07:59.315 [2024-07-25 09:23:12.011844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16141208567710540000 len:41088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:12.011868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.315 #54 NEW cov: 12309 ft: 15116 corp: 35/1408b lim: 105 exec/s: 54 rss: 74Mb L: 41/71 MS: 1 CMP- DE: "\001\027\254ZW\240\177N"- 00:07:59.315 [2024-07-25 09:23:12.062252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:12.062276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.315 [2024-07-25 09:23:12.062326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695456 len:21848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:12.062338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.315 [2024-07-25 09:23:12.062388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:12.062401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.315 #55 NEW cov: 12309 ft: 15125 corp: 36/1482b lim: 105 exec/s: 55 rss: 74Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:07:59.315 [2024-07-25 09:23:12.102258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:12.102285] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.315 [2024-07-25 09:23:12.102322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5644511533055058047 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.315 [2024-07-25 09:23:12.102335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.315 #56 NEW cov: 12309 ft: 15133 corp: 37/1526b lim: 105 exec/s: 56 rss: 74Mb L: 44/74 MS: 1 PersAutoDict- DE: "\001\027\254ZW\240\177N"- 00:07:59.574 [2024-07-25 09:23:12.142498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.574 [2024-07-25 09:23:12.142523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.574 [2024-07-25 09:23:12.142576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6149046634973028437 len:22358 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.574 [2024-07-25 09:23:12.142590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.574 [2024-07-25 09:23:12.142639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.574 [2024-07-25 09:23:12.142652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.574 #57 NEW cov: 12309 ft: 15176 corp: 38/1597b lim: 105 exec/s: 57 rss: 74Mb L: 71/74 MS: 1 ChangeByte- 00:07:59.574 [2024-07-25 09:23:12.192532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.192557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.575 [2024-07-25 09:23:12.192601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693460208352 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.192614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.575 #58 NEW cov: 12309 ft: 15182 corp: 39/1643b lim: 105 exec/s: 58 rss: 74Mb L: 46/74 MS: 1 ChangeBinInt- 00:07:59.575 [2024-07-25 09:23:12.242646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.242670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.575 [2024-07-25 09:23:12.242733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693568550229 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.242746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.575 #59 NEW cov: 12309 ft: 15190 corp: 40/1686b lim: 
105 exec/s: 59 rss: 74Mb L: 43/74 MS: 1 InsertByte- 00:07:59.575 [2024-07-25 09:23:12.282756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.282780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.575 [2024-07-25 09:23:12.282827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6149068122682482773 len:57387 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.282840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.575 #60 NEW cov: 12309 ft: 15204 corp: 41/1733b lim: 105 exec/s: 60 rss: 74Mb L: 47/74 MS: 1 EraseBytes- 00:07:59.575 [2024-07-25 09:23:12.333110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.333133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.575 [2024-07-25 09:23:12.333182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6149068122682482773 len:57387 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.333193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.575 [2024-07-25 09:23:12.333243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16204198713387996501 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.333255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.575 [2024-07-25 09:23:12.333306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16203998602280886496 len:21846 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.333318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.575 #61 NEW cov: 12309 ft: 15716 corp: 42/1824b lim: 105 exec/s: 61 rss: 74Mb L: 91/91 MS: 1 CopyPart- 00:07:59.575 [2024-07-25 09:23:12.373104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198712692498656 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.373129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.575 [2024-07-25 09:23:12.373181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6148914693577695456 len:21848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.373194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.575 [2024-07-25 09:23:12.373248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2692697600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.575 [2024-07-25 09:23:12.373261] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.833 #62 NEW cov: 12309 ft: 15733 corp: 43/1898b lim: 105 exec/s: 31 rss: 74Mb L: 74/91 MS: 1 PersAutoDict- DE: "\001\027\254ZW\240\177N"- 00:07:59.833 #62 DONE cov: 12309 ft: 15733 corp: 43/1898b lim: 105 exec/s: 31 rss: 74Mb 00:07:59.833 ###### Recommended dictionary. ###### 00:07:59.833 "\001\027\254ZW\240\177N" # Uses: 2 00:07:59.833 ###### End of recommended dictionary. ###### 00:07:59.833 Done 62 runs in 2 second(s) 00:07:59.833 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:59.833 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.833 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.833 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:59.834 09:23:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:59.834 [2024-07-25 09:23:12.560440] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:59.834 [2024-07-25 09:23:12.560518] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422509 ] 00:07:59.834 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.091 [2024-07-25 09:23:12.724681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.091 [2024-07-25 09:23:12.790339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.091 [2024-07-25 09:23:12.848795] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.091 [2024-07-25 09:23:12.865026] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:00.091 INFO: Running with entropic power schedule (0xFF, 100). 00:08:00.091 INFO: Seed: 2451329681 00:08:00.349 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:08:00.349 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:08:00.349 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:00.349 INFO: A corpus is not provided, starting from an empty corpus 00:08:00.349 #2 INITED exec/s: 0 rss: 63Mb 00:08:00.349 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:00.349 This may also happen if the target rejected all inputs we tried so far 00:08:00.349 [2024-07-25 09:23:12.932730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.349 [2024-07-25 09:23:12.932766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.349 [2024-07-25 09:23:12.932840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.349 [2024-07-25 09:23:12.932858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.349 [2024-07-25 09:23:12.932923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.349 [2024-07-25 09:23:12.932939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.349 [2024-07-25 09:23:12.933037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.349 [2024-07-25 09:23:12.933056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.349 NEW_FUNC[1/701]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:00.349 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:00.349 #3 NEW cov: 12094 ft: 12098 corp: 2/106b lim: 120 exec/s: 0 rss: 71Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:08:00.349 [2024-07-25 09:23:13.102320] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.349 [2024-07-25 09:23:13.102362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.349 NEW_FUNC[1/1]: 0x12fa250 in nvmf_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/nvmf.c:150 00:08:00.349 #8 NEW cov: 12216 ft: 13532 corp: 3/148b lim: 120 exec/s: 0 rss: 71Mb L: 42/105 MS: 5 ShuffleBytes-InsertByte-ChangeBit-ShuffleBytes-CrossOver- 00:08:00.606 [2024-07-25 09:23:13.162913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.162939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.606 [2024-07-25 09:23:13.163008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.163023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.606 #9 NEW cov: 12222 ft: 14130 corp: 4/199b lim: 120 exec/s: 0 rss: 72Mb L: 51/105 MS: 1 CopyPart- 00:08:00.606 [2024-07-25 09:23:13.222714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.222738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.606 #10 NEW cov: 12307 ft: 14435 corp: 5/227b lim: 120 exec/s: 0 rss: 72Mb L: 28/105 MS: 1 EraseBytes- 00:08:00.606 [2024-07-25 09:23:13.273332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.273358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.606 [2024-07-25 09:23:13.273420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.273436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.606 #16 NEW cov: 12307 ft: 14531 corp: 6/278b lim: 120 exec/s: 0 rss: 72Mb L: 51/105 MS: 1 ChangeBinInt- 00:08:00.606 [2024-07-25 09:23:13.343847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.343872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.606 [2024-07-25 09:23:13.343951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.343966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.606 #17 NEW cov: 12307 ft: 14598 
corp: 7/330b lim: 120 exec/s: 0 rss: 72Mb L: 52/105 MS: 1 InsertByte- 00:08:00.606 [2024-07-25 09:23:13.394186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.394212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.606 [2024-07-25 09:23:13.394280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.606 [2024-07-25 09:23:13.394293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.885 #18 NEW cov: 12307 ft: 14645 corp: 8/381b lim: 120 exec/s: 0 rss: 72Mb L: 51/105 MS: 1 ChangeByte- 00:08:00.885 [2024-07-25 09:23:13.444366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.444389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.885 #19 NEW cov: 12307 ft: 14683 corp: 9/423b lim: 120 exec/s: 0 rss: 72Mb L: 42/105 MS: 1 ChangeBinInt- 00:08:00.885 [2024-07-25 09:23:13.494883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599920127 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.494910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.885 [2024-07-25 09:23:13.494971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.494988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.885 #22 NEW cov: 12307 ft: 14704 corp: 10/479b lim: 120 exec/s: 0 rss: 72Mb L: 56/105 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes- 00:08:00.885 [2024-07-25 09:23:13.544790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.544815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.885 #23 NEW cov: 12307 ft: 14808 corp: 11/521b lim: 120 exec/s: 0 rss: 72Mb L: 42/105 MS: 1 ChangeBit- 00:08:00.885 [2024-07-25 09:23:13.596057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.596086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.885 [2024-07-25 09:23:13.596201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.596218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.885 [2024-07-25 
09:23:13.596299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.596314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.885 [2024-07-25 09:23:13.596402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.596422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.885 #24 NEW cov: 12307 ft: 14848 corp: 12/628b lim: 120 exec/s: 0 rss: 72Mb L: 107/107 MS: 1 CopyPart- 00:08:00.885 [2024-07-25 09:23:13.656455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.656481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.885 [2024-07-25 09:23:13.656578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.656591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.885 [2024-07-25 09:23:13.656685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.656705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.885 [2024-07-25 09:23:13.656801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.885 [2024-07-25 09:23:13.656819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.185 #25 NEW cov: 12307 ft: 14864 corp: 13/743b lim: 120 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 CrossOver- 00:08:01.185 [2024-07-25 09:23:13.726116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.726142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.185 [2024-07-25 09:23:13.726228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.726244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.185 #26 NEW cov: 12307 ft: 14901 corp: 14/795b lim: 120 exec/s: 0 rss: 72Mb L: 52/115 MS: 1 InsertByte- 00:08:01.185 [2024-07-25 09:23:13.786516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.786542] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.185 [2024-07-25 09:23:13.786602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.786618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.185 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:01.185 #27 NEW cov: 12330 ft: 14946 corp: 15/848b lim: 120 exec/s: 0 rss: 72Mb L: 53/115 MS: 1 InsertByte- 00:08:01.185 [2024-07-25 09:23:13.857477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.857501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.185 [2024-07-25 09:23:13.857591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.857610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.185 [2024-07-25 09:23:13.857695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.857711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.185 #28 NEW cov: 12330 ft: 15242 corp: 16/941b lim: 120 exec/s: 0 rss: 72Mb L: 93/115 MS: 1 EraseBytes- 00:08:01.185 [2024-07-25 09:23:13.908052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.908084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.185 [2024-07-25 09:23:13.908192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761549844 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.908208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.185 [2024-07-25 09:23:13.908288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.908300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.185 [2024-07-25 09:23:13.908387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.908403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.185 #29 NEW cov: 12330 ft: 15250 corp: 17/1056b lim: 120 exec/s: 29 rss: 72Mb L: 115/115 MS: 1 
ChangeBit- 00:08:01.185 [2024-07-25 09:23:13.967343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.185 [2024-07-25 09:23:13.967369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.458 #30 NEW cov: 12330 ft: 15271 corp: 18/1084b lim: 120 exec/s: 30 rss: 72Mb L: 28/115 MS: 1 EraseBytes- 00:08:01.458 [2024-07-25 09:23:14.027928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.458 [2024-07-25 09:23:14.027952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.458 [2024-07-25 09:23:14.028016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.458 [2024-07-25 09:23:14.028034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.458 #31 NEW cov: 12330 ft: 15298 corp: 19/1137b lim: 120 exec/s: 31 rss: 72Mb L: 53/115 MS: 1 ChangeBit- 00:08:01.458 [2024-07-25 09:23:14.078205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.458 [2024-07-25 09:23:14.078234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.458 [2024-07-25 09:23:14.078312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:335544320 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.458 [2024-07-25 09:23:14.078327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.458 #32 NEW cov: 12330 ft: 15310 corp: 20/1187b lim: 120 exec/s: 32 rss: 72Mb L: 50/115 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:01.458 [2024-07-25 09:23:14.148015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.458 [2024-07-25 09:23:14.148042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.458 #33 NEW cov: 12330 ft: 15353 corp: 21/1215b lim: 120 exec/s: 33 rss: 72Mb L: 28/115 MS: 1 CrossOver- 00:08:01.458 [2024-07-25 09:23:14.218305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.458 [2024-07-25 09:23:14.218345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.458 #34 NEW cov: 12330 ft: 15364 corp: 22/1257b lim: 120 exec/s: 34 rss: 72Mb L: 42/115 MS: 1 CopyPart- 00:08:01.724 [2024-07-25 09:23:14.278637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.278666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.724 #37 NEW cov: 12330 ft: 15381 corp: 23/1284b lim: 120 exec/s: 37 rss: 73Mb L: 27/115 MS: 3 EraseBytes-CopyPart-PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:01.724 [2024-07-25 09:23:14.349247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.349275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.724 [2024-07-25 09:23:14.349366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803375157154836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.349382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.724 #38 NEW cov: 12330 ft: 15388 corp: 24/1336b lim: 120 exec/s: 38 rss: 73Mb L: 52/115 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:01.724 [2024-07-25 09:23:14.399525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.399553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.724 [2024-07-25 09:23:14.399628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.399644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.724 #39 NEW cov: 12330 ft: 15404 corp: 25/1388b lim: 120 exec/s: 39 rss: 73Mb L: 52/115 MS: 1 CrossOver- 00:08:01.724 [2024-07-25 09:23:14.470412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.470439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.724 [2024-07-25 09:23:14.470549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.470562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.724 [2024-07-25 09:23:14.470649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.470662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.724 [2024-07-25 09:23:14.470751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.724 [2024-07-25 09:23:14.470768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.724 #40 NEW cov: 12330 ft: 15434 corp: 26/1489b lim: 120 exec/s: 40 rss: 73Mb L: 101/115 MS: 1 
PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:08:02.007 [2024-07-25 09:23:14.539515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446804407626372116 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.539548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.007 #41 NEW cov: 12330 ft: 15442 corp: 27/1517b lim: 120 exec/s: 41 rss: 73Mb L: 28/115 MS: 1 ChangeBinInt- 00:08:02.007 [2024-07-25 09:23:14.590349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.590375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.590460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:180388626432 len:31355 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.590475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.590564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8825501086245354106 len:31355 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.590585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.007 #42 NEW cov: 12330 ft: 15472 corp: 28/1590b lim: 120 exec/s: 42 rss: 73Mb L: 73/115 MS: 1 InsertRepeatedBytes- 00:08:02.007 [2024-07-25 09:23:14.651127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.651152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.651242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.651257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.651321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.651334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.651396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.651417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.007 #43 NEW cov: 12330 ft: 15477 corp: 29/1691b lim: 120 exec/s: 43 rss: 73Mb L: 101/115 MS: 1 ChangeBinInt- 00:08:02.007 [2024-07-25 09:23:14.711485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.711512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.711610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.711625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.711710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.711724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.711817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.711836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.007 #44 NEW cov: 12330 ft: 15489 corp: 30/1792b lim: 120 exec/s: 44 rss: 73Mb L: 101/115 MS: 1 ShuffleBytes- 00:08:02.007 [2024-07-25 09:23:14.781388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2383552180931662868 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.781414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.007 [2024-07-25 09:23:14.781473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.007 [2024-07-25 09:23:14.781491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.283 #45 NEW cov: 12330 ft: 15500 corp: 31/1845b lim: 120 exec/s: 45 rss: 73Mb L: 53/115 MS: 1 InsertByte- 00:08:02.283 [2024-07-25 09:23:14.841611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803458438599700 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.283 [2024-07-25 09:23:14.841637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.283 [2024-07-25 09:23:14.841699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12471615344564507668 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.283 [2024-07-25 09:23:14.841716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.283 #46 NEW cov: 12330 ft: 15518 corp: 32/1897b lim: 120 exec/s: 46 rss: 73Mb L: 52/115 MS: 1 InsertByte- 00:08:02.283 [2024-07-25 09:23:14.892802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1446803456593761300 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.283 [2024-07-25 09:23:14.892830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.283 [2024-07-25 
09:23:14.892934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:02.283 [2024-07-25 09:23:14.892952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:02.283 [2024-07-25 09:23:14.893030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:02.283 [2024-07-25 09:23:14.893042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:02.283 [2024-07-25 09:23:14.893124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:02.283 [2024-07-25 09:23:14.893139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:02.283 #47 NEW cov: 12330 ft: 15527 corp: 33/2012b lim: 120 exec/s: 23 rss: 74Mb L: 115/115 MS: 1 ChangeBinInt-
00:08:02.283 #47 DONE cov: 12330 ft: 15527 corp: 33/2012b lim: 120 exec/s: 23 rss: 74Mb
00:08:02.283 ###### Recommended dictionary. ######
00:08:02.283 "\000\000\000\000\000\000\000\000" # Uses: 1
00:08:02.283 "\001\000\000\000\000\000\000\000" # Uses: 1
00:08:02.283 ###### End of recommended dictionary. ######
00:08:02.283 Done 47 runs in 2 second(s)
00:08:02.283 09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
09:23:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18
[2024-07-25 09:23:15.063712] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-25 09:23:15.063772] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422946 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 09:23:15.229785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 09:23:15.294256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-25 09:23:15.352569] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-07-25 09:23:15.368763] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 660358244
INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 64Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:02.799 This may also happen if the target rejected all inputs we tried so far 00:08:02.799 [2024-07-25 09:23:15.413499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.799 [2024-07-25 09:23:15.413531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.799 [2024-07-25 09:23:15.413561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.799 [2024-07-25 09:23:15.413577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.799 [2024-07-25 09:23:15.413608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.799 [2024-07-25 09:23:15.413625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.799 NEW_FUNC[1/700]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:02.799 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:02.800 #8 NEW cov: 12046 ft: 12045 corp: 2/61b lim: 100 exec/s: 0 rss: 71Mb L: 60/60 MS: 1 InsertRepeatedBytes- 00:08:02.800 [2024-07-25 09:23:15.593923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.800 [2024-07-25 09:23:15.593959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.800 [2024-07-25 09:23:15.593990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.800 [2024-07-25 09:23:15.594004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.800 [2024-07-25 09:23:15.594037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:02.800 [2024-07-25 09:23:15.594049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.058 #14 NEW cov: 12159 ft: 12556 corp: 3/122b lim: 100 exec/s: 0 rss: 71Mb L: 61/61 MS: 1 InsertByte- 00:08:03.058 [2024-07-25 09:23:15.684076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.058 [2024-07-25 09:23:15.684106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.058 [2024-07-25 09:23:15.684135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.058 [2024-07-25 09:23:15.684148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.058 [2024-07-25 09:23:15.684175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.058 [2024-07-25 09:23:15.684187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.058 #20 NEW cov: 12165 ft: 12759 corp: 4/191b lim: 100 exec/s: 0 rss: 71Mb 
L: 69/69 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:03.058 [2024-07-25 09:23:15.764272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.058 [2024-07-25 09:23:15.764300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.058 [2024-07-25 09:23:15.764329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.058 [2024-07-25 09:23:15.764342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.058 [2024-07-25 09:23:15.764368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.058 [2024-07-25 09:23:15.764380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.058 #21 NEW cov: 12250 ft: 13049 corp: 5/260b lim: 100 exec/s: 0 rss: 72Mb L: 69/69 MS: 1 ShuffleBytes- 00:08:03.058 [2024-07-25 09:23:15.844474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.058 [2024-07-25 09:23:15.844499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.058 [2024-07-25 09:23:15.844543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.058 [2024-07-25 09:23:15.844556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.058 [2024-07-25 09:23:15.844587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.058 [2024-07-25 09:23:15.844599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.316 #22 NEW cov: 12250 ft: 13096 corp: 6/337b lim: 100 exec/s: 0 rss: 72Mb L: 77/77 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:03.316 [2024-07-25 09:23:15.924662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.316 [2024-07-25 09:23:15.924687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.316 [2024-07-25 09:23:15.924730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.316 [2024-07-25 09:23:15.924744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.316 [2024-07-25 09:23:15.924771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.316 [2024-07-25 09:23:15.924783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.316 #23 NEW cov: 12250 ft: 13312 corp: 7/397b lim: 100 exec/s: 0 rss: 72Mb L: 60/77 MS: 1 ShuffleBytes- 00:08:03.316 [2024-07-25 09:23:15.984782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.316 [2024-07-25 09:23:15.984808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:03.316 [2024-07-25 09:23:15.984854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.316 [2024-07-25 09:23:15.984868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.316 #30 NEW cov: 12250 ft: 13615 corp: 8/451b lim: 100 exec/s: 0 rss: 72Mb L: 54/77 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:03.316 [2024-07-25 09:23:16.044965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.316 [2024-07-25 09:23:16.044990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.316 [2024-07-25 09:23:16.045033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.316 [2024-07-25 09:23:16.045046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.316 [2024-07-25 09:23:16.045080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.316 [2024-07-25 09:23:16.045093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.316 #31 NEW cov: 12250 ft: 13724 corp: 9/528b lim: 100 exec/s: 0 rss: 72Mb L: 77/77 MS: 1 ChangeBit- 00:08:03.574 [2024-07-25 09:23:16.125229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.574 [2024-07-25 09:23:16.125256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.574 [2024-07-25 09:23:16.125285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.574 [2024-07-25 09:23:16.125299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.574 [2024-07-25 09:23:16.125326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.574 [2024-07-25 09:23:16.125338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.574 #32 NEW cov: 12250 ft: 13769 corp: 10/597b lim: 100 exec/s: 0 rss: 72Mb L: 69/77 MS: 1 CMP- DE: "\011\000"- 00:08:03.575 [2024-07-25 09:23:16.175311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.575 [2024-07-25 09:23:16.175341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.575 [2024-07-25 09:23:16.175369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.575 [2024-07-25 09:23:16.175383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.575 [2024-07-25 09:23:16.175409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.575 [2024-07-25 09:23:16.175421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.575 #33 NEW cov: 
12250 ft: 13813 corp: 11/675b lim: 100 exec/s: 0 rss: 72Mb L: 78/78 MS: 1 InsertByte- 00:08:03.575 [2024-07-25 09:23:16.255451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.575 [2024-07-25 09:23:16.255476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.575 [2024-07-25 09:23:16.255520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.575 [2024-07-25 09:23:16.255534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.575 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:03.575 #34 NEW cov: 12273 ft: 13854 corp: 12/729b lim: 100 exec/s: 0 rss: 72Mb L: 54/78 MS: 1 ShuffleBytes- 00:08:03.575 [2024-07-25 09:23:16.335697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.575 [2024-07-25 09:23:16.335723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.575 [2024-07-25 09:23:16.335766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.575 [2024-07-25 09:23:16.335780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.575 [2024-07-25 09:23:16.335807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.575 [2024-07-25 09:23:16.335820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.575 #35 NEW cov: 12273 ft: 13873 corp: 13/803b lim: 100 exec/s: 0 rss: 72Mb L: 74/78 MS: 1 EraseBytes- 00:08:03.833 [2024-07-25 09:23:16.385875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.833 [2024-07-25 09:23:16.385902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.385931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.833 [2024-07-25 09:23:16.385945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.385972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.833 [2024-07-25 09:23:16.385985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.833 #36 NEW cov: 12273 ft: 13989 corp: 14/874b lim: 100 exec/s: 36 rss: 72Mb L: 71/78 MS: 1 InsertRepeatedBytes- 00:08:03.833 [2024-07-25 09:23:16.446022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.833 [2024-07-25 09:23:16.446048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.446097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.833 [2024-07-25 
09:23:16.446116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.446146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.833 [2024-07-25 09:23:16.446158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.446182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.833 [2024-07-25 09:23:16.446194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.833 #37 NEW cov: 12273 ft: 14265 corp: 15/956b lim: 100 exec/s: 37 rss: 72Mb L: 82/82 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:03.833 [2024-07-25 09:23:16.526288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.833 [2024-07-25 09:23:16.526324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.526368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.833 [2024-07-25 09:23:16.526381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.526408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.833 [2024-07-25 09:23:16.526420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.526444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.833 [2024-07-25 09:23:16.526456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.833 #38 NEW cov: 12273 ft: 14281 corp: 16/1042b lim: 100 exec/s: 38 rss: 72Mb L: 86/86 MS: 1 CMP- DE: "\001\000\000\000\000\000\003\377"- 00:08:03.833 [2024-07-25 09:23:16.606491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.833 [2024-07-25 09:23:16.606517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.606560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.833 [2024-07-25 09:23:16.606573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.606600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.833 [2024-07-25 09:23:16.606612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.833 [2024-07-25 09:23:16.606636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.833 [2024-07-25 09:23:16.606649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.092 #39 NEW cov: 12273 ft: 14308 corp: 17/1127b lim: 100 exec/s: 39 rss: 72Mb L: 85/86 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:04.092 [2024-07-25 09:23:16.666613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.092 [2024-07-25 09:23:16.666639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.666667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.092 [2024-07-25 09:23:16.666680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.666706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.092 [2024-07-25 09:23:16.666725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.092 #40 NEW cov: 12273 ft: 14346 corp: 18/1202b lim: 100 exec/s: 40 rss: 72Mb L: 75/86 MS: 1 InsertByte- 00:08:04.092 [2024-07-25 09:23:16.726766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.092 [2024-07-25 09:23:16.726794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.726822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.092 [2024-07-25 09:23:16.726836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.726863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.092 [2024-07-25 09:23:16.726876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.092 #41 NEW cov: 12273 ft: 14365 corp: 19/1263b lim: 100 exec/s: 41 rss: 72Mb L: 61/86 MS: 1 ChangeByte- 00:08:04.092 [2024-07-25 09:23:16.786921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.092 [2024-07-25 09:23:16.786948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.786991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.092 [2024-07-25 09:23:16.787005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.787035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.092 [2024-07-25 09:23:16.787047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.787078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.092 [2024-07-25 09:23:16.787091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 
p:0 m:0 dnr:1 00:08:04.092 #42 NEW cov: 12273 ft: 14442 corp: 20/1356b lim: 100 exec/s: 42 rss: 72Mb L: 93/93 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\017"- 00:08:04.092 [2024-07-25 09:23:16.867115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.092 [2024-07-25 09:23:16.867142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.867171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.092 [2024-07-25 09:23:16.867185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.092 [2024-07-25 09:23:16.867210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.092 [2024-07-25 09:23:16.867222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.350 #43 NEW cov: 12273 ft: 14454 corp: 21/1425b lim: 100 exec/s: 43 rss: 72Mb L: 69/93 MS: 1 EraseBytes- 00:08:04.350 [2024-07-25 09:23:16.947386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.350 [2024-07-25 09:23:16.947413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.350 [2024-07-25 09:23:16.947456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.350 [2024-07-25 09:23:16.947469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.350 [2024-07-25 09:23:16.947499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.350 [2024-07-25 09:23:16.947511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.350 [2024-07-25 09:23:16.947535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.350 [2024-07-25 09:23:16.947547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.350 #44 NEW cov: 12273 ft: 14519 corp: 22/1508b lim: 100 exec/s: 44 rss: 72Mb L: 83/93 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:04.350 [2024-07-25 09:23:17.027490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.350 [2024-07-25 09:23:17.027516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.350 [2024-07-25 09:23:17.027560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.350 [2024-07-25 09:23:17.027574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.350 #45 NEW cov: 12273 ft: 14594 corp: 23/1555b lim: 100 exec/s: 45 rss: 72Mb L: 47/93 MS: 1 EraseBytes- 00:08:04.350 [2024-07-25 09:23:17.087694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.350 [2024-07-25 09:23:17.087720] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.350 [2024-07-25 09:23:17.087749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.350 [2024-07-25 09:23:17.087762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.350 [2024-07-25 09:23:17.087788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.350 [2024-07-25 09:23:17.087800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.350 #46 NEW cov: 12273 ft: 14658 corp: 24/1630b lim: 100 exec/s: 46 rss: 72Mb L: 75/93 MS: 1 ChangeBit- 00:08:04.350 [2024-07-25 09:23:17.147845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.350 [2024-07-25 09:23:17.147870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.350 [2024-07-25 09:23:17.147913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.350 [2024-07-25 09:23:17.147926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.350 [2024-07-25 09:23:17.147954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.350 [2024-07-25 09:23:17.147966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.608 #47 NEW cov: 12273 ft: 14668 corp: 25/1691b lim: 100 exec/s: 47 rss: 72Mb L: 61/93 MS: 1 ShuffleBytes- 00:08:04.609 [2024-07-25 09:23:17.228088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.609 [2024-07-25 09:23:17.228113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.228156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.609 [2024-07-25 09:23:17.228169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.228195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.609 [2024-07-25 09:23:17.228207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.228235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.609 [2024-07-25 09:23:17.228247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.609 #48 NEW cov: 12273 ft: 14674 corp: 26/1784b lim: 100 exec/s: 48 rss: 72Mb L: 93/93 MS: 1 CopyPart- 00:08:04.609 [2024-07-25 09:23:17.288280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.609 [2024-07-25 09:23:17.288307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.288352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.609 [2024-07-25 09:23:17.288366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.288392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.609 [2024-07-25 09:23:17.288405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.288429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.609 [2024-07-25 09:23:17.288442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.609 #49 NEW cov: 12273 ft: 14711 corp: 27/1866b lim: 100 exec/s: 49 rss: 72Mb L: 82/93 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:04.609 [2024-07-25 09:23:17.338368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.609 [2024-07-25 09:23:17.338394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.338423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.609 [2024-07-25 09:23:17.338436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.338462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.609 [2024-07-25 09:23:17.338474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.609 #50 NEW cov: 12273 ft: 14716 corp: 28/1943b lim: 100 exec/s: 50 rss: 72Mb L: 77/93 MS: 1 ChangeBit- 00:08:04.609 [2024-07-25 09:23:17.388500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.609 [2024-07-25 09:23:17.388526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.388555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.609 [2024-07-25 09:23:17.388568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.609 [2024-07-25 09:23:17.388594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.609 [2024-07-25 09:23:17.388606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.868 #51 NEW cov: 12273 ft: 14730 corp: 29/2017b lim: 100 exec/s: 25 rss: 72Mb L: 74/93 MS: 1 CrossOver- 00:08:04.868 #51 DONE cov: 12273 ft: 14730 corp: 29/2017b lim: 100 exec/s: 25 rss: 72Mb 00:08:04.868 ###### Recommended dictionary. 
######
00:08:04.868 "\000\000\000\000\000\000\000\000" # Uses: 5
00:08:04.868 "\011\000" # Uses: 0
00:08:04.868 "\001\000\000\000\000\000\003\377" # Uses: 0
00:08:04.868 "\000\000\000\000\000\000\000\017" # Uses: 0
00:08:04.868 ###### End of recommended dictionary. ######
00:08:04.868 Done 51 runs in 2 second(s)
00:08:04.868 09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
09:23:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
[2024-07-25 09:23:17.573346] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:04.868 [2024-07-25 09:23:17.573423] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423374 ] 00:08:04.868 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.125 [2024-07-25 09:23:17.735766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.125 [2024-07-25 09:23:17.799699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.125 [2024-07-25 09:23:17.858106] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.125 [2024-07-25 09:23:17.874342] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:05.125 INFO: Running with entropic power schedule (0xFF, 100). 00:08:05.125 INFO: Seed: 3166355914 00:08:05.125 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:08:05.125 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:08:05.125 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:05.126 INFO: A corpus is not provided, starting from an empty corpus 00:08:05.126 #2 INITED exec/s: 0 rss: 63Mb 00:08:05.126 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:05.126 This may also happen if the target rejected all inputs we tried so far 00:08:05.126 [2024-07-25 09:23:17.919874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:52943 00:08:05.126 [2024-07-25 09:23:17.919902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.126 [2024-07-25 09:23:17.919939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:05.126 [2024-07-25 09:23:17.919953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.126 [2024-07-25 09:23:17.920001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:08:05.126 [2024-07-25 09:23:17.920013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.126 [2024-07-25 09:23:17.920065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:52943 00:08:05.126 [2024-07-25 09:23:17.920084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.383 NEW_FUNC[1/700]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:05.383 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:05.383 #3 NEW cov: 12024 ft: 12023 corp: 2/48b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:08:05.383 [2024-07-25 09:23:18.070103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 
len:52943 00:08:05.383 [2024-07-25 09:23:18.070147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.383 [2024-07-25 09:23:18.070223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:05.383 [2024-07-25 09:23:18.070242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.383 #4 NEW cov: 12137 ft: 12922 corp: 3/75b lim: 50 exec/s: 0 rss: 70Mb L: 27/47 MS: 1 CrossOver- 00:08:05.383 [2024-07-25 09:23:18.130197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7089336938131513954 len:25187 00:08:05.383 [2024-07-25 09:23:18.130224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.383 [2024-07-25 09:23:18.130266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7089336938131513954 len:25187 00:08:05.383 [2024-07-25 09:23:18.130280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.383 #5 NEW cov: 12143 ft: 13125 corp: 4/103b lim: 50 exec/s: 0 rss: 70Mb L: 28/47 MS: 1 InsertRepeatedBytes- 00:08:05.383 [2024-07-25 09:23:18.170255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:52943 00:08:05.383 [2024-07-25 09:23:18.170280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.383 [2024-07-25 09:23:18.170318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:05.383 [2024-07-25 09:23:18.170331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.641 #6 NEW cov: 12228 ft: 13295 corp: 5/130b lim: 50 exec/s: 0 rss: 71Mb L: 27/47 MS: 1 CrossOver- 00:08:05.641 [2024-07-25 09:23:18.220345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075604643794638 len:52943 00:08:05.642 [2024-07-25 09:23:18.220369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.642 [2024-07-25 09:23:18.220407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:05.642 [2024-07-25 09:23:18.220424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.642 #7 NEW cov: 12228 ft: 13445 corp: 6/157b lim: 50 exec/s: 0 rss: 71Mb L: 27/47 MS: 1 CrossOver- 00:08:05.642 [2024-07-25 09:23:18.260470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075604643794638 len:52943 00:08:05.642 [2024-07-25 09:23:18.260494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.642 [2024-07-25 09:23:18.260536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:05.642 [2024-07-25 09:23:18.260550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.642 #8 NEW cov: 12228 ft: 13657 corp: 7/184b lim: 50 exec/s: 0 rss: 71Mb L: 27/47 MS: 1 ChangeBit- 00:08:05.642 [2024-07-25 09:23:18.310537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:52943 00:08:05.642 [2024-07-25 09:23:18.310563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.642 #9 NEW cov: 12228 ft: 14034 corp: 8/201b lim: 50 exec/s: 0 rss: 71Mb L: 17/47 MS: 1 EraseBytes- 00:08:05.642 [2024-07-25 09:23:18.361030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7089336938131513954 len:25187 00:08:05.642 [2024-07-25 09:23:18.361055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.642 [2024-07-25 09:23:18.361113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7089336938131513954 len:25187 00:08:05.642 [2024-07-25 09:23:18.361126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.642 [2024-07-25 09:23:18.361181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7089336938131513954 len:25187 00:08:05.642 [2024-07-25 09:23:18.361195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.642 [2024-07-25 09:23:18.361248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:7089336938131513954 len:25187 00:08:05.642 [2024-07-25 09:23:18.361261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.642 #10 NEW cov: 12228 ft: 14164 corp: 9/243b lim: 50 exec/s: 0 rss: 71Mb L: 42/47 MS: 1 CopyPart- 00:08:05.642 [2024-07-25 09:23:18.410765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075604643794638 len:52943 00:08:05.642 [2024-07-25 09:23:18.410790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.642 #11 NEW cov: 12228 ft: 14298 corp: 10/260b lim: 50 exec/s: 0 rss: 71Mb L: 17/47 MS: 1 EraseBytes- 00:08:05.900 [2024-07-25 09:23:18.450888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355407624 len:52943 00:08:05.900 [2024-07-25 09:23:18.450913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.900 #12 NEW cov: 12228 ft: 14384 corp: 11/279b lim: 50 exec/s: 0 rss: 71Mb L: 19/47 MS: 1 CMP- DE: "\001\010"- 00:08:05.900 [2024-07-25 09:23:18.501267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:10024 00:08:05.900 [2024-07-25 09:23:18.501292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.900 [2024-07-25 09:23:18.501345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075601831012302 len:52943 00:08:05.900 [2024-07-25 09:23:18.501361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.900 [2024-07-25 09:23:18.501412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14901860100364750542 len:52943 00:08:05.900 [2024-07-25 09:23:18.501426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.900 #13 NEW cov: 12228 ft: 14619 corp: 12/311b lim: 50 exec/s: 0 rss: 71Mb L: 32/47 MS: 1 InsertRepeatedBytes- 00:08:05.900 [2024-07-25 09:23:18.541130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601191489230 len:52943 00:08:05.900 [2024-07-25 09:23:18.541155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.900 #14 NEW cov: 12228 ft: 14644 corp: 13/330b lim: 50 exec/s: 0 rss: 72Mb L: 19/47 MS: 1 CopyPart- 00:08:05.900 [2024-07-25 09:23:18.591239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601321905870 len:52943 00:08:05.900 [2024-07-25 09:23:18.591263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.900 #15 NEW cov: 12228 ft: 14690 corp: 14/346b lim: 50 exec/s: 0 rss: 72Mb L: 16/47 MS: 1 EraseBytes- 00:08:05.900 [2024-07-25 09:23:18.631412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075604643794638 len:52943 00:08:05.900 [2024-07-25 09:23:18.631437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.900 #16 NEW cov: 12228 ft: 14702 corp: 15/363b lim: 50 exec/s: 0 rss: 72Mb L: 17/47 MS: 1 ShuffleBytes- 00:08:05.900 [2024-07-25 09:23:18.681631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14901901878518271694 len:52943 00:08:05.900 [2024-07-25 09:23:18.681655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.900 [2024-07-25 09:23:18.681696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:05.900 [2024-07-25 09:23:18.681709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.900 #17 NEW cov: 12228 ft: 14769 corp: 16/391b lim: 50 exec/s: 0 rss: 72Mb L: 28/47 MS: 1 InsertByte- 00:08:06.158 [2024-07-25 09:23:18.721895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14915921961573469902 len:65536 00:08:06.158 [2024-07-25 09:23:18.721921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.721962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 00:08:06.158 [2024-07-25 09:23:18.721974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.722027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902074763652288206 len:52943 00:08:06.158 [2024-07-25 09:23:18.722040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.158 #18 NEW cov: 12228 ft: 14784 corp: 17/426b lim: 50 exec/s: 0 rss: 72Mb L: 35/47 MS: 1 InsertRepeatedBytes- 00:08:06.158 [2024-07-25 09:23:18.771906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7089336938131513954 len:25187 00:08:06.158 [2024-07-25 09:23:18.771930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.771968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7089336938131504738 len:25187 00:08:06.158 [2024-07-25 09:23:18.771982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.158 #19 NEW cov: 12228 ft: 14811 corp: 18/454b lim: 50 exec/s: 0 rss: 72Mb L: 28/47 MS: 1 ChangeByte- 00:08:06.158 [2024-07-25 09:23:18.812153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:9992 00:08:06.158 [2024-07-25 09:23:18.812177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.812227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075601831012302 len:52943 00:08:06.158 [2024-07-25 09:23:18.812238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.812290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14901860100364750542 len:52943 00:08:06.158 [2024-07-25 09:23:18.812302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.158 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:06.158 #20 NEW cov: 12251 ft: 14841 corp: 19/486b lim: 50 exec/s: 0 rss: 72Mb L: 32/47 MS: 1 ChangeBit- 00:08:06.158 [2024-07-25 09:23:18.862396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:52943 00:08:06.158 [2024-07-25 09:23:18.862422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.862473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:06.158 [2024-07-25 09:23:18.862486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.862537] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:08:06.158 [2024-07-25 09:23:18.862551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.862602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:52943 00:08:06.158 [2024-07-25 09:23:18.862615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.158 #21 NEW cov: 12251 ft: 14866 corp: 20/533b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 CrossOver- 00:08:06.158 [2024-07-25 09:23:18.902161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:45 00:08:06.158 [2024-07-25 09:23:18.902186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.158 #24 NEW cov: 12251 ft: 14887 corp: 21/544b lim: 50 exec/s: 24 rss: 72Mb L: 11/47 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:08:06.158 [2024-07-25 09:23:18.942651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14915921961573469902 len:65536 00:08:06.158 [2024-07-25 09:23:18.942676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.942727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446595059819216895 len:30841 00:08:06.158 [2024-07-25 09:23:18.942738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.942794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680969754459535480 len:65536 00:08:06.158 [2024-07-25 09:23:18.942807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.158 [2024-07-25 09:23:18.942861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902074763652288206 len:52943 00:08:06.158 [2024-07-25 09:23:18.942874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.416 #25 NEW cov: 12251 ft: 14909 corp: 22/589b lim: 50 exec/s: 25 rss: 72Mb L: 45/47 MS: 1 InsertRepeatedBytes- 00:08:06.416 [2024-07-25 09:23:18.992682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:9992 00:08:06.416 [2024-07-25 09:23:18.992706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.416 [2024-07-25 09:23:18.992753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075601831012302 len:52943 00:08:06.416 [2024-07-25 09:23:18.992767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.416 [2024-07-25 09:23:18.992819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643741960 len:2767 00:08:06.416 [2024-07-25 09:23:18.992831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.416 #26 NEW cov: 12251 ft: 14940 corp: 23/623b lim: 50 exec/s: 26 rss: 72Mb L: 34/47 MS: 1 PersAutoDict- DE: "\001\010"- 00:08:06.416 [2024-07-25 09:23:19.042677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075604643794638 len:52944 00:08:06.416 [2024-07-25 09:23:19.042701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.416 [2024-07-25 09:23:19.042739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:06.416 [2024-07-25 09:23:19.042753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.416 #27 NEW cov: 12251 ft: 14951 corp: 24/650b lim: 50 exec/s: 27 rss: 72Mb L: 27/47 MS: 1 ChangeBit- 00:08:06.416 [2024-07-25 09:23:19.082676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14844155531115810510 len:52943 00:08:06.416 [2024-07-25 09:23:19.082701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.416 #28 NEW cov: 12251 ft: 15003 corp: 25/669b lim: 50 exec/s: 28 rss: 72Mb L: 19/47 MS: 1 PersAutoDict- DE: "\001\010"- 00:08:06.416 [2024-07-25 09:23:19.123028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14845834692372516558 len:65536 00:08:06.416 [2024-07-25 09:23:19.123053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.416 [2024-07-25 09:23:19.123102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:06.416 [2024-07-25 09:23:19.123122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.416 [2024-07-25 09:23:19.123174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902074763652288206 len:52943 00:08:06.416 [2024-07-25 09:23:19.123187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.416 #29 NEW cov: 12251 ft: 15010 corp: 26/704b lim: 50 exec/s: 29 rss: 72Mb L: 35/47 MS: 1 ChangeBinInt- 00:08:06.416 [2024-07-25 09:23:19.162889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355407624 len:2767 00:08:06.416 [2024-07-25 09:23:19.162914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.416 #30 NEW cov: 12251 ft: 15030 corp: 27/723b lim: 50 exec/s: 30 rss: 72Mb L: 19/47 MS: 1 CopyPart- 00:08:06.416 [2024-07-25 09:23:19.203308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355420110 len:10024 00:08:06.416 [2024-07-25 09:23:19.203344] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.416 [2024-07-25 09:23:19.203397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075601831012302 len:52943 00:08:06.416 [2024-07-25 09:23:19.203410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.416 [2024-07-25 09:23:19.203460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14901860100364750542 len:52943 00:08:06.416 [2024-07-25 09:23:19.203474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.675 #31 NEW cov: 12251 ft: 15076 corp: 28/755b lim: 50 exec/s: 31 rss: 72Mb L: 32/47 MS: 1 ChangeByte- 00:08:06.675 [2024-07-25 09:23:19.243186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075597060440328 len:52943 00:08:06.675 [2024-07-25 09:23:19.243210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.675 #32 NEW cov: 12251 ft: 15106 corp: 29/774b lim: 50 exec/s: 32 rss: 72Mb L: 19/47 MS: 1 ChangeBinInt- 00:08:06.675 [2024-07-25 09:23:19.283615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7089336938131513954 len:25187 00:08:06.675 [2024-07-25 09:23:19.283641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.675 [2024-07-25 09:23:19.283691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7089336938131513954 len:25187 00:08:06.675 [2024-07-25 09:23:19.283703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.675 [2024-07-25 09:23:19.283754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:7089336938131513954 len:25187 00:08:06.675 [2024-07-25 09:23:19.283767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.675 [2024-07-25 09:23:19.283820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:7089229898956563042 len:25187 00:08:06.675 [2024-07-25 09:23:19.283834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.675 #33 NEW cov: 12251 ft: 15119 corp: 30/818b lim: 50 exec/s: 33 rss: 72Mb L: 44/47 MS: 1 PersAutoDict- DE: "\001\010"- 00:08:06.675 [2024-07-25 09:23:19.333539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:52943 00:08:06.675 [2024-07-25 09:23:19.333563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.675 [2024-07-25 09:23:19.333599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604639600334 len:52943 00:08:06.675 [2024-07-25 09:23:19.333613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.675 #34 NEW cov: 12251 ft: 15135 corp: 31/845b lim: 50 exec/s: 34 rss: 72Mb L: 27/47 MS: 1 ChangeBit- 00:08:06.675 [2024-07-25 09:23:19.373905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:52943 00:08:06.675 [2024-07-25 09:23:19.373929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.676 [2024-07-25 09:23:19.373978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075601607118542 len:52943 00:08:06.676 [2024-07-25 09:23:19.373990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.676 [2024-07-25 09:23:19.374041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:08:06.676 [2024-07-25 09:23:19.374054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.676 [2024-07-25 09:23:19.374108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:52943 00:08:06.676 [2024-07-25 09:23:19.374122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.676 #35 NEW cov: 12251 ft: 15155 corp: 32/892b lim: 50 exec/s: 35 rss: 72Mb L: 47/47 MS: 1 ChangeByte- 00:08:06.676 [2024-07-25 09:23:19.424016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:52943 00:08:06.676 [2024-07-25 09:23:19.424041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.676 [2024-07-25 09:23:19.424091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075601355420110 len:10024 00:08:06.676 [2024-07-25 09:23:19.424103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.676 [2024-07-25 09:23:19.424174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075601831012302 len:52943 00:08:06.676 [2024-07-25 09:23:19.424189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.676 [2024-07-25 09:23:19.424244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14846906509208506062 len:52943 00:08:06.676 [2024-07-25 09:23:19.424258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.676 #36 NEW cov: 12251 ft: 15158 corp: 33/933b lim: 50 exec/s: 36 rss: 72Mb L: 41/47 MS: 1 CrossOver- 00:08:06.676 [2024-07-25 09:23:19.463769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355407624 len:52943 00:08:06.676 [2024-07-25 09:23:19.463796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:06.934 #37 NEW cov: 12251 ft: 15184 corp: 34/952b lim: 50 exec/s: 37 rss: 72Mb L: 19/47 MS: 1 CopyPart- 00:08:06.934 [2024-07-25 09:23:19.503902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4381666871654010574 len:52943 00:08:06.934 [2024-07-25 09:23:19.503928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.934 #38 NEW cov: 12251 ft: 15187 corp: 35/971b lim: 50 exec/s: 38 rss: 72Mb L: 19/47 MS: 1 ChangeByte- 00:08:06.934 [2024-07-25 09:23:19.544034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075604643794638 len:52943 00:08:06.934 [2024-07-25 09:23:19.544059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.934 #39 NEW cov: 12251 ft: 15225 corp: 36/988b lim: 50 exec/s: 39 rss: 72Mb L: 17/47 MS: 1 CopyPart- 00:08:06.934 [2024-07-25 09:23:19.584165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075604643794638 len:52943 00:08:06.934 [2024-07-25 09:23:19.584190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.934 #40 NEW cov: 12251 ft: 15235 corp: 37/1005b lim: 50 exec/s: 40 rss: 72Mb L: 17/47 MS: 1 ChangeBit- 00:08:06.934 [2024-07-25 09:23:19.634418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075604643794638 len:52943 00:08:06.934 [2024-07-25 09:23:19.634444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.934 [2024-07-25 09:23:19.634481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:08:06.934 [2024-07-25 09:23:19.634495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.934 #41 NEW cov: 12251 ft: 15302 corp: 38/1032b lim: 50 exec/s: 41 rss: 72Mb L: 27/47 MS: 1 ChangeBit- 00:08:06.935 [2024-07-25 09:23:19.674666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14901849316708503246 len:52943 00:08:06.935 [2024-07-25 09:23:19.674691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.935 [2024-07-25 09:23:19.674759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2868457397494949671 len:52943 00:08:06.935 [2024-07-25 09:23:19.674772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.935 [2024-07-25 09:23:19.674827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:2767 00:08:06.935 [2024-07-25 09:23:19.674840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.935 #42 NEW cov: 12251 ft: 15334 corp: 39/1066b lim: 50 exec/s: 42 rss: 72Mb L: 34/47 MS: 1 CMP- DE: "\001\000"- 00:08:06.935 [2024-07-25 09:23:19.714993] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601355460302 len:52943 00:08:06.935 [2024-07-25 09:23:19.715018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.935 [2024-07-25 09:23:19.715072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794457 len:52943 00:08:06.935 [2024-07-25 09:23:19.715083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.935 [2024-07-25 09:23:19.715132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:08:06.935 [2024-07-25 09:23:19.715146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.935 [2024-07-25 09:23:19.715198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:52943 00:08:06.935 [2024-07-25 09:23:19.715212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.935 [2024-07-25 09:23:19.715265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:14902075604643794638 len:52747 00:08:06.935 [2024-07-25 09:23:19.715278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:07.194 #43 NEW cov: 12251 ft: 15386 corp: 40/1116b lim: 50 exec/s: 43 rss: 72Mb L: 50/50 MS: 1 CopyPart- 00:08:07.194 [2024-07-25 09:23:19.764692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14844155531115810510 len:52943 00:08:07.194 [2024-07-25 09:23:19.764718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.194 #44 NEW cov: 12251 ft: 15394 corp: 41/1135b lim: 50 exec/s: 44 rss: 72Mb L: 19/50 MS: 1 ShuffleBytes- 00:08:07.194 [2024-07-25 09:23:19.815167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14901849316708503246 len:52943 00:08:07.194 [2024-07-25 09:23:19.815193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.194 [2024-07-25 09:23:19.815242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2868457397494949671 len:52943 00:08:07.194 [2024-07-25 09:23:19.815256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.194 [2024-07-25 09:23:19.815303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2812259540790202062 len:10191 00:08:07.194 [2024-07-25 09:23:19.815317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.194 [2024-07-25 09:23:19.815366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:52943 00:08:07.194 [2024-07-25 
09:23:19.815379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.194 #45 NEW cov: 12251 ft: 15420 corp: 42/1184b lim: 50 exec/s: 45 rss: 72Mb L: 49/50 MS: 1 CopyPart- 00:08:07.194 [2024-07-25 09:23:19.865026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601321905870 len:52943 00:08:07.194 [2024-07-25 09:23:19.865051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.194 [2024-07-25 09:23:19.865095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902129687878618830 len:65536 00:08:07.194 [2024-07-25 09:23:19.865109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.194 #46 NEW cov: 12251 ft: 15426 corp: 43/1208b lim: 50 exec/s: 46 rss: 72Mb L: 24/50 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\003"- 00:08:07.194 [2024-07-25 09:23:19.915480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902074759541817608 len:52943 00:08:07.194 [2024-07-25 09:23:19.915505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.194 [2024-07-25 09:23:19.915553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14854884947831738062 len:10024 00:08:07.194 [2024-07-25 09:23:19.915567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.194 [2024-07-25 09:23:19.915616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:08:07.194 [2024-07-25 09:23:19.915630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.194 [2024-07-25 09:23:19.915679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:778787173209919182 len:52943 00:08:07.194 [2024-07-25 09:23:19.915691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.194 #47 NEW cov: 12251 ft: 15433 corp: 44/1248b lim: 50 exec/s: 23 rss: 72Mb L: 40/50 MS: 1 CrossOver- 00:08:07.194 #47 DONE cov: 12251 ft: 15433 corp: 44/1248b lim: 50 exec/s: 23 rss: 72Mb 00:08:07.194 ###### Recommended dictionary. ###### 00:08:07.194 "\001\010" # Uses: 3 00:08:07.194 "\001\000" # Uses: 0 00:08:07.194 "\377\377\377\377\377\377\377\003" # Uses: 0 00:08:07.194 ###### End of recommended dictionary. 
###### 00:08:07.194 Done 47 runs in 2 second(s) 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:07.453 09:23:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:07.453 [2024-07-25 09:23:20.090593] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:07.453 [2024-07-25 09:23:20.090674] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423812 ] 00:08:07.453 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.453 [2024-07-25 09:23:20.254772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.712 [2024-07-25 09:23:20.320534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.712 [2024-07-25 09:23:20.378796] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.712 [2024-07-25 09:23:20.395025] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:07.712 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.712 INFO: Seed: 1391404420 00:08:07.712 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:08:07.712 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:08:07.712 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:07.712 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.712 #2 INITED exec/s: 0 rss: 63Mb 00:08:07.712 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.712 This may also happen if the target rejected all inputs we tried so far 00:08:07.712 [2024-07-25 09:23:20.440368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.712 [2024-07-25 09:23:20.440399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.712 [2024-07-25 09:23:20.440461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.712 [2024-07-25 09:23:20.440478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.971 NEW_FUNC[1/702]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:07.971 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:07.971 #7 NEW cov: 12082 ft: 12074 corp: 2/43b lim: 90 exec/s: 0 rss: 71Mb L: 42/42 MS: 5 InsertByte-ChangeBinInt-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:08:07.971 [2024-07-25 09:23:20.590727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.971 [2024-07-25 09:23:20.590760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.971 [2024-07-25 09:23:20.590822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.971 [2024-07-25 09:23:20.590838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.971 #8 NEW cov: 12195 ft: 12636 corp: 3/85b lim: 90 exec/s: 0 rss: 71Mb L: 42/42 MS: 1 CrossOver- 00:08:07.971 [2024-07-25 09:23:20.640968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 
nsid:0 00:08:07.971 [2024-07-25 09:23:20.640995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.971 [2024-07-25 09:23:20.641047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.971 [2024-07-25 09:23:20.641065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.971 [2024-07-25 09:23:20.641134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:07.971 [2024-07-25 09:23:20.641152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.971 #9 NEW cov: 12201 ft: 13293 corp: 4/151b lim: 90 exec/s: 0 rss: 71Mb L: 66/66 MS: 1 CrossOver- 00:08:07.971 [2024-07-25 09:23:20.691079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.971 [2024-07-25 09:23:20.691104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.971 [2024-07-25 09:23:20.691156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.971 [2024-07-25 09:23:20.691174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.971 [2024-07-25 09:23:20.691236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:07.971 [2024-07-25 09:23:20.691254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.971 #10 NEW cov: 12286 ft: 13543 corp: 5/218b lim: 90 exec/s: 0 rss: 72Mb L: 67/67 MS: 1 InsertByte- 00:08:07.971 [2024-07-25 09:23:20.741058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.971 [2024-07-25 09:23:20.741086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.971 [2024-07-25 09:23:20.741141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.971 [2024-07-25 09:23:20.741163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.971 #11 NEW cov: 12286 ft: 13701 corp: 6/260b lim: 90 exec/s: 0 rss: 72Mb L: 42/67 MS: 1 ShuffleBytes- 00:08:08.231 [2024-07-25 09:23:20.781232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.231 [2024-07-25 09:23:20.781258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:20.781318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.231 [2024-07-25 09:23:20.781337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.231 #12 NEW cov: 12286 ft: 13772 corp: 7/302b lim: 90 exec/s: 0 rss: 72Mb L: 42/67 MS: 1 ChangeBit- 00:08:08.231 [2024-07-25 09:23:20.831375] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.231 [2024-07-25 09:23:20.831401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:20.831463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.231 [2024-07-25 09:23:20.831482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.231 #18 NEW cov: 12286 ft: 13912 corp: 8/344b lim: 90 exec/s: 0 rss: 72Mb L: 42/67 MS: 1 ChangeByte- 00:08:08.231 [2024-07-25 09:23:20.871621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.231 [2024-07-25 09:23:20.871647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:20.871705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.231 [2024-07-25 09:23:20.871723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:20.871785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.231 [2024-07-25 09:23:20.871802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.231 #19 NEW cov: 12286 ft: 13930 corp: 9/410b lim: 90 exec/s: 0 rss: 72Mb L: 66/67 MS: 1 ChangeBit- 00:08:08.231 [2024-07-25 09:23:20.911737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.231 [2024-07-25 09:23:20.911763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:20.911812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.231 [2024-07-25 09:23:20.911830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:20.911892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.231 [2024-07-25 09:23:20.911910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.231 #25 NEW cov: 12286 ft: 13968 corp: 10/476b lim: 90 exec/s: 0 rss: 72Mb L: 66/67 MS: 1 ChangeByte- 00:08:08.231 [2024-07-25 09:23:20.961839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.231 [2024-07-25 09:23:20.961863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:20.961914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.231 [2024-07-25 09:23:20.961932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:20.961998] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.231 [2024-07-25 09:23:20.962013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.231 #26 NEW cov: 12286 ft: 13999 corp: 11/542b lim: 90 exec/s: 0 rss: 72Mb L: 66/67 MS: 1 ChangeByte- 00:08:08.231 [2024-07-25 09:23:21.001779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.231 [2024-07-25 09:23:21.001805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.231 [2024-07-25 09:23:21.001860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.231 [2024-07-25 09:23:21.001879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.231 #27 NEW cov: 12286 ft: 14030 corp: 12/585b lim: 90 exec/s: 0 rss: 72Mb L: 43/67 MS: 1 InsertByte- 00:08:08.490 [2024-07-25 09:23:21.041971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.490 [2024-07-25 09:23:21.041997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.042059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.490 [2024-07-25 09:23:21.042081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.490 #28 NEW cov: 12286 ft: 14053 corp: 13/628b lim: 90 exec/s: 0 rss: 72Mb L: 43/67 MS: 1 ChangeByte- 00:08:08.490 [2024-07-25 09:23:21.092254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.490 [2024-07-25 09:23:21.092279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.092329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.490 [2024-07-25 09:23:21.092345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.092406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.490 [2024-07-25 09:23:21.092423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.490 #29 NEW cov: 12286 ft: 14126 corp: 14/696b lim: 90 exec/s: 0 rss: 72Mb L: 68/68 MS: 1 CrossOver- 00:08:08.490 [2024-07-25 09:23:21.132337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.490 [2024-07-25 09:23:21.132363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.132416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.490 [2024-07-25 09:23:21.132434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.132495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.490 [2024-07-25 09:23:21.132510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.490 #30 NEW cov: 12286 ft: 14150 corp: 15/763b lim: 90 exec/s: 0 rss: 72Mb L: 67/68 MS: 1 InsertByte- 00:08:08.490 [2024-07-25 09:23:21.172603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.490 [2024-07-25 09:23:21.172627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.172689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.490 [2024-07-25 09:23:21.172707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.172769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.490 [2024-07-25 09:23:21.172786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.172865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:08.490 [2024-07-25 09:23:21.172881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.490 #31 NEW cov: 12286 ft: 14506 corp: 16/844b lim: 90 exec/s: 0 rss: 72Mb L: 81/81 MS: 1 CrossOver- 00:08:08.490 [2024-07-25 09:23:21.222427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.490 [2024-07-25 09:23:21.222452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.490 [2024-07-25 09:23:21.222511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.491 [2024-07-25 09:23:21.222529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.491 #32 NEW cov: 12286 ft: 14514 corp: 17/886b lim: 90 exec/s: 0 rss: 72Mb L: 42/81 MS: 1 CopyPart- 00:08:08.491 [2024-07-25 09:23:21.262567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.491 [2024-07-25 09:23:21.262591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.491 [2024-07-25 09:23:21.262650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.491 [2024-07-25 09:23:21.262667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.491 #33 NEW cov: 12286 ft: 14526 corp: 18/929b lim: 90 exec/s: 0 rss: 72Mb L: 43/81 MS: 1 ShuffleBytes- 00:08:08.749 [2024-07-25 09:23:21.302687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.749 [2024-07-25 
09:23:21.302712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.749 [2024-07-25 09:23:21.302775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.749 [2024-07-25 09:23:21.302793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.749 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:08.749 #34 NEW cov: 12309 ft: 14579 corp: 19/977b lim: 90 exec/s: 0 rss: 72Mb L: 48/81 MS: 1 EraseBytes- 00:08:08.749 [2024-07-25 09:23:21.353172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.749 [2024-07-25 09:23:21.353197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.749 [2024-07-25 09:23:21.353251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.749 [2024-07-25 09:23:21.353269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.749 [2024-07-25 09:23:21.353328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.749 [2024-07-25 09:23:21.353344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.749 [2024-07-25 09:23:21.353407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:08.749 [2024-07-25 09:23:21.353423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.749 #35 NEW cov: 12309 ft: 14592 corp: 20/1066b lim: 90 exec/s: 0 rss: 72Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:08:08.749 [2024-07-25 09:23:21.403105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.749 [2024-07-25 09:23:21.403130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.749 [2024-07-25 09:23:21.403184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.749 [2024-07-25 09:23:21.403203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.749 [2024-07-25 09:23:21.403265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.749 [2024-07-25 09:23:21.403281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.749 #36 NEW cov: 12309 ft: 14607 corp: 21/1134b lim: 90 exec/s: 36 rss: 72Mb L: 68/89 MS: 1 ShuffleBytes- 00:08:08.749 [2024-07-25 09:23:21.453114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.749 [2024-07-25 09:23:21.453139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.749 [2024-07-25 
09:23:21.453195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.750 [2024-07-25 09:23:21.453213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.750 #37 NEW cov: 12309 ft: 14613 corp: 22/1180b lim: 90 exec/s: 37 rss: 72Mb L: 46/89 MS: 1 CrossOver- 00:08:08.750 [2024-07-25 09:23:21.493560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.750 [2024-07-25 09:23:21.493585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.750 [2024-07-25 09:23:21.493641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.750 [2024-07-25 09:23:21.493659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.750 [2024-07-25 09:23:21.493721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.750 [2024-07-25 09:23:21.493737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.750 [2024-07-25 09:23:21.493798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:08.750 [2024-07-25 09:23:21.493815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.750 #38 NEW cov: 12309 ft: 14635 corp: 23/1259b lim: 90 exec/s: 38 rss: 72Mb L: 79/89 MS: 1 InsertRepeatedBytes- 00:08:08.750 [2024-07-25 09:23:21.533357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.750 [2024-07-25 09:23:21.533381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.750 [2024-07-25 09:23:21.533442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.750 [2024-07-25 09:23:21.533463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.750 #39 NEW cov: 12309 ft: 14643 corp: 24/1301b lim: 90 exec/s: 39 rss: 72Mb L: 42/89 MS: 1 ChangeBinInt- 00:08:09.009 [2024-07-25 09:23:21.573460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.009 [2024-07-25 09:23:21.573485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.009 [2024-07-25 09:23:21.573546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.009 [2024-07-25 09:23:21.573564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.009 #40 NEW cov: 12309 ft: 14702 corp: 25/1344b lim: 90 exec/s: 40 rss: 73Mb L: 43/89 MS: 1 ShuffleBytes- 00:08:09.009 [2024-07-25 09:23:21.623438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.009 [2024-07-25 09:23:21.623462] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.009 #41 NEW cov: 12309 ft: 15529 corp: 26/1379b lim: 90 exec/s: 41 rss: 73Mb L: 35/89 MS: 1 EraseBytes- 00:08:09.009 [2024-07-25 09:23:21.683904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.009 [2024-07-25 09:23:21.683929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.009 [2024-07-25 09:23:21.683981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.009 [2024-07-25 09:23:21.683999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.009 [2024-07-25 09:23:21.684061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.009 [2024-07-25 09:23:21.684081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.009 #42 NEW cov: 12309 ft: 15536 corp: 27/1443b lim: 90 exec/s: 42 rss: 73Mb L: 64/89 MS: 1 EraseBytes- 00:08:09.009 [2024-07-25 09:23:21.733882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.009 [2024-07-25 09:23:21.733907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.009 [2024-07-25 09:23:21.733961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.009 [2024-07-25 09:23:21.733979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.009 #43 NEW cov: 12309 ft: 15547 corp: 28/1480b lim: 90 exec/s: 43 rss: 73Mb L: 37/89 MS: 1 EraseBytes- 00:08:09.009 [2024-07-25 09:23:21.774318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.009 [2024-07-25 09:23:21.774343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.009 [2024-07-25 09:23:21.774399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.009 [2024-07-25 09:23:21.774417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.009 [2024-07-25 09:23:21.774478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.009 [2024-07-25 09:23:21.774496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.009 [2024-07-25 09:23:21.774557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.009 [2024-07-25 09:23:21.774573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.009 #44 NEW cov: 12309 ft: 15559 corp: 29/1566b lim: 90 exec/s: 44 rss: 73Mb L: 86/89 MS: 1 CopyPart- 00:08:09.269 [2024-07-25 09:23:21.824166] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.269 [2024-07-25 09:23:21.824194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.824256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.269 [2024-07-25 09:23:21.824275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.269 #45 NEW cov: 12309 ft: 15571 corp: 30/1614b lim: 90 exec/s: 45 rss: 73Mb L: 48/89 MS: 1 ChangeBinInt- 00:08:09.269 [2024-07-25 09:23:21.874443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.269 [2024-07-25 09:23:21.874467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.874518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.269 [2024-07-25 09:23:21.874535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.874597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.269 [2024-07-25 09:23:21.874613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.269 #46 NEW cov: 12309 ft: 15582 corp: 31/1671b lim: 90 exec/s: 46 rss: 73Mb L: 57/89 MS: 1 InsertRepeatedBytes- 00:08:09.269 [2024-07-25 09:23:21.914703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.269 [2024-07-25 09:23:21.914728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.914782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.269 [2024-07-25 09:23:21.914800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.914862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.269 [2024-07-25 09:23:21.914880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.914942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.269 [2024-07-25 09:23:21.914958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.269 #47 NEW cov: 12309 ft: 15591 corp: 32/1759b lim: 90 exec/s: 47 rss: 74Mb L: 88/89 MS: 1 CrossOver- 00:08:09.269 [2024-07-25 09:23:21.964882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.269 [2024-07-25 09:23:21.964907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.964959] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.269 [2024-07-25 09:23:21.964977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.965039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.269 [2024-07-25 09:23:21.965056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:21.965119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.269 [2024-07-25 09:23:21.965135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.269 #48 NEW cov: 12309 ft: 15614 corp: 33/1841b lim: 90 exec/s: 48 rss: 74Mb L: 82/89 MS: 1 InsertByte- 00:08:09.269 [2024-07-25 09:23:22.004661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.269 [2024-07-25 09:23:22.004686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:22.004742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.269 [2024-07-25 09:23:22.004761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.269 #49 NEW cov: 12309 ft: 15629 corp: 34/1890b lim: 90 exec/s: 49 rss: 74Mb L: 49/89 MS: 1 InsertByte- 00:08:09.269 [2024-07-25 09:23:22.044960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.269 [2024-07-25 09:23:22.044985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:22.045037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.269 [2024-07-25 09:23:22.045056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.269 [2024-07-25 09:23:22.045121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.269 [2024-07-25 09:23:22.045138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.528 #50 NEW cov: 12309 ft: 15649 corp: 35/1958b lim: 90 exec/s: 50 rss: 74Mb L: 68/89 MS: 1 CopyPart- 00:08:09.528 [2024-07-25 09:23:22.095098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.528 [2024-07-25 09:23:22.095123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.528 [2024-07-25 09:23:22.095173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.528 [2024-07-25 09:23:22.095190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.528 [2024-07-25 09:23:22.095250] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.528 [2024-07-25 09:23:22.095267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.528 #51 NEW cov: 12309 ft: 15654 corp: 36/2024b lim: 90 exec/s: 51 rss: 74Mb L: 66/89 MS: 1 CrossOver- 00:08:09.528 [2024-07-25 09:23:22.135194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.529 [2024-07-25 09:23:22.135220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.135275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.529 [2024-07-25 09:23:22.135293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.135354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.529 [2024-07-25 09:23:22.135372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.529 #52 NEW cov: 12309 ft: 15661 corp: 37/2092b lim: 90 exec/s: 52 rss: 74Mb L: 68/89 MS: 1 CMP- DE: "\344\325\354F_\254\027\000"- 00:08:09.529 [2024-07-25 09:23:22.175184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.529 [2024-07-25 09:23:22.175210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.175277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.529 [2024-07-25 09:23:22.175295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.529 #53 NEW cov: 12309 ft: 15671 corp: 38/2138b lim: 90 exec/s: 53 rss: 74Mb L: 46/89 MS: 1 ChangeBinInt- 00:08:09.529 [2024-07-25 09:23:22.225453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.529 [2024-07-25 09:23:22.225479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.225530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.529 [2024-07-25 09:23:22.225547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.225608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.529 [2024-07-25 09:23:22.225624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.529 #54 NEW cov: 12309 ft: 15700 corp: 39/2195b lim: 90 exec/s: 54 rss: 74Mb L: 57/89 MS: 1 ChangeBinInt- 00:08:09.529 [2024-07-25 09:23:22.275888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.529 [2024-07-25 09:23:22.275913] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.275988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.529 [2024-07-25 09:23:22.276006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.276067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.529 [2024-07-25 09:23:22.276094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.276157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.529 [2024-07-25 09:23:22.276173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.276234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:09.529 [2024-07-25 09:23:22.276250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:09.529 #55 NEW cov: 12309 ft: 15746 corp: 40/2285b lim: 90 exec/s: 55 rss: 74Mb L: 90/90 MS: 1 CopyPart- 00:08:09.529 [2024-07-25 09:23:22.325739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.529 [2024-07-25 09:23:22.325764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.325819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.529 [2024-07-25 09:23:22.325836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.529 [2024-07-25 09:23:22.325897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.529 [2024-07-25 09:23:22.325912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.788 #56 NEW cov: 12309 ft: 15758 corp: 41/2342b lim: 90 exec/s: 56 rss: 74Mb L: 57/90 MS: 1 ChangeBit- 00:08:09.788 [2024-07-25 09:23:22.375758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.788 [2024-07-25 09:23:22.375784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.788 [2024-07-25 09:23:22.375847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.788 [2024-07-25 09:23:22.375870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.788 #57 NEW cov: 12309 ft: 15769 corp: 42/2384b lim: 90 exec/s: 57 rss: 74Mb L: 42/90 MS: 1 ChangeBinInt- 00:08:09.788 [2024-07-25 09:23:22.416161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.788 [2024-07-25 09:23:22.416187] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.788 [2024-07-25 09:23:22.416242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.788 [2024-07-25 09:23:22.416260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.788 [2024-07-25 09:23:22.416321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.788 [2024-07-25 09:23:22.416337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.788 [2024-07-25 09:23:22.416398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.788 [2024-07-25 09:23:22.416413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.788 #58 NEW cov: 12309 ft: 15790 corp: 43/2463b lim: 90 exec/s: 29 rss: 74Mb L: 79/90 MS: 1 ShuffleBytes- 00:08:09.788 #58 DONE cov: 12309 ft: 15790 corp: 43/2463b lim: 90 exec/s: 29 rss: 74Mb 00:08:09.788 ###### Recommended dictionary. ###### 00:08:09.788 "\344\325\354F_\254\027\000" # Uses: 0 00:08:09.788 ###### End of recommended dictionary. ###### 00:08:09.788 Done 58 runs in 2 second(s) 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo 
leak:nvmf_ctrlr_create 00:08:09.788 09:23:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:10.047 [2024-07-25 09:23:22.604422] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:10.047 [2024-07-25 09:23:22.604499] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424241 ] 00:08:10.047 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.047 [2024-07-25 09:23:22.767736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.047 [2024-07-25 09:23:22.831437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.306 [2024-07-25 09:23:22.890091] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.306 [2024-07-25 09:23:22.906319] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:10.306 INFO: Running with entropic power schedule (0xFF, 100). 00:08:10.306 INFO: Seed: 3904383800 00:08:10.306 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:08:10.306 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:08:10.306 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:10.306 INFO: A corpus is not provided, starting from an empty corpus 00:08:10.306 #2 INITED exec/s: 0 rss: 63Mb 00:08:10.306 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:10.306 This may also happen if the target rejected all inputs we tried so far 00:08:10.306 [2024-07-25 09:23:22.951061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.306 [2024-07-25 09:23:22.951101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.306 [2024-07-25 09:23:22.951133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.306 [2024-07-25 09:23:22.951149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.306 [2024-07-25 09:23:22.951181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.306 [2024-07-25 09:23:22.951194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.306 NEW_FUNC[1/702]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:10.306 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:10.306 #11 NEW cov: 12057 ft: 12056 corp: 2/34b lim: 50 exec/s: 0 rss: 71Mb L: 33/33 MS: 4 InsertByte-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:08:10.565 [2024-07-25 09:23:23.121461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.565 [2024-07-25 09:23:23.121497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.565 [2024-07-25 09:23:23.121530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.565 [2024-07-25 09:23:23.121546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.565 [2024-07-25 09:23:23.121575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.565 [2024-07-25 09:23:23.121588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.565 #12 NEW cov: 12170 ft: 12652 corp: 3/67b lim: 50 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 ChangeBit- 00:08:10.565 [2024-07-25 09:23:23.201578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.565 [2024-07-25 09:23:23.201612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.565 [2024-07-25 09:23:23.201644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.565 [2024-07-25 09:23:23.201659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.565 [2024-07-25 09:23:23.201694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.565 [2024-07-25 09:23:23.201707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:08:10.565 #13 NEW cov: 12176 ft: 12940 corp: 4/100b lim: 50 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 CMP- DE: "\013\000\000\000"- 00:08:10.565 [2024-07-25 09:23:23.261777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.565 [2024-07-25 09:23:23.261804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.565 [2024-07-25 09:23:23.261848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.565 [2024-07-25 09:23:23.261863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.565 [2024-07-25 09:23:23.261890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.565 [2024-07-25 09:23:23.261904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.565 [2024-07-25 09:23:23.261930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.565 [2024-07-25 09:23:23.261944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.565 #14 NEW cov: 12261 ft: 13530 corp: 5/144b lim: 50 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:08:10.565 [2024-07-25 09:23:23.351881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.565 [2024-07-25 09:23:23.351907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.565 [2024-07-25 09:23:23.351953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.565 [2024-07-25 09:23:23.351976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.823 #15 NEW cov: 12261 ft: 13944 corp: 6/173b lim: 50 exec/s: 0 rss: 72Mb L: 29/44 MS: 1 EraseBytes- 00:08:10.823 [2024-07-25 09:23:23.432165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.823 [2024-07-25 09:23:23.432192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.823 [2024-07-25 09:23:23.432236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.823 [2024-07-25 09:23:23.432250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.823 [2024-07-25 09:23:23.432278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.823 [2024-07-25 09:23:23.432292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.823 #16 NEW cov: 12261 ft: 14024 corp: 7/206b lim: 50 exec/s: 0 rss: 72Mb L: 33/44 MS: 1 ChangeBit- 00:08:10.823 [2024-07-25 09:23:23.482401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.823 [2024-07-25 09:23:23.482431] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.823 [2024-07-25 09:23:23.482467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.823 [2024-07-25 09:23:23.482483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.823 [2024-07-25 09:23:23.482510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.823 [2024-07-25 09:23:23.482524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.823 [2024-07-25 09:23:23.482551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.823 [2024-07-25 09:23:23.482565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.823 #17 NEW cov: 12261 ft: 14123 corp: 8/246b lim: 50 exec/s: 0 rss: 72Mb L: 40/44 MS: 1 CopyPart- 00:08:10.823 [2024-07-25 09:23:23.572481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.823 [2024-07-25 09:23:23.572509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.823 [2024-07-25 09:23:23.572555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.823 [2024-07-25 09:23:23.572570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.081 #18 NEW cov: 12261 ft: 14192 corp: 9/275b lim: 50 exec/s: 0 rss: 72Mb L: 29/44 MS: 1 ChangeBit- 00:08:11.081 [2024-07-25 09:23:23.652774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.081 [2024-07-25 09:23:23.652801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.081 [2024-07-25 09:23:23.652845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.081 [2024-07-25 09:23:23.652860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.081 [2024-07-25 09:23:23.652887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.081 [2024-07-25 09:23:23.652901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.081 [2024-07-25 09:23:23.652926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.081 [2024-07-25 09:23:23.652940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.081 #19 NEW cov: 12261 ft: 14234 corp: 10/319b lim: 50 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 ShuffleBytes- 00:08:11.082 [2024-07-25 09:23:23.743004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.082 [2024-07-25 09:23:23.743031] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.082 [2024-07-25 09:23:23.743082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.082 [2024-07-25 09:23:23.743101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.082 [2024-07-25 09:23:23.743128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.082 [2024-07-25 09:23:23.743141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.082 [2024-07-25 09:23:23.743183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.082 [2024-07-25 09:23:23.743201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.082 #20 NEW cov: 12261 ft: 14274 corp: 11/365b lim: 50 exec/s: 0 rss: 72Mb L: 46/46 MS: 1 CMP- DE: "\013\000"- 00:08:11.082 [2024-07-25 09:23:23.823188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.082 [2024-07-25 09:23:23.823215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.082 [2024-07-25 09:23:23.823245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.082 [2024-07-25 09:23:23.823259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.082 [2024-07-25 09:23:23.823287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.082 [2024-07-25 09:23:23.823301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.082 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:11.082 #21 NEW cov: 12284 ft: 14314 corp: 12/398b lim: 50 exec/s: 0 rss: 72Mb L: 33/46 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:08:11.082 [2024-07-25 09:23:23.883418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.082 [2024-07-25 09:23:23.883446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.082 [2024-07-25 09:23:23.883475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.082 [2024-07-25 09:23:23.883490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.082 [2024-07-25 09:23:23.883517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.082 [2024-07-25 09:23:23.883531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.082 [2024-07-25 09:23:23.883556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 
00:08:11.082 [2024-07-25 09:23:23.883570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.340 #22 NEW cov: 12284 ft: 14365 corp: 13/442b lim: 50 exec/s: 0 rss: 72Mb L: 44/46 MS: 1 ShuffleBytes- 00:08:11.340 [2024-07-25 09:23:23.933398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.340 [2024-07-25 09:23:23.933424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.340 [2024-07-25 09:23:23.933468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.340 [2024-07-25 09:23:23.933483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.340 #23 NEW cov: 12284 ft: 14394 corp: 14/471b lim: 50 exec/s: 23 rss: 72Mb L: 29/46 MS: 1 EraseBytes- 00:08:11.340 [2024-07-25 09:23:24.013712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.340 [2024-07-25 09:23:24.013737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.340 [2024-07-25 09:23:24.013781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.340 [2024-07-25 09:23:24.013795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.340 [2024-07-25 09:23:24.013822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.340 [2024-07-25 09:23:24.013839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.340 [2024-07-25 09:23:24.013865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.340 [2024-07-25 09:23:24.013879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.340 #24 NEW cov: 12284 ft: 14422 corp: 15/513b lim: 50 exec/s: 24 rss: 72Mb L: 42/46 MS: 1 CMP- DE: "\177\000"- 00:08:11.340 [2024-07-25 09:23:24.093835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.340 [2024-07-25 09:23:24.093863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.340 [2024-07-25 09:23:24.093908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.340 [2024-07-25 09:23:24.093923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.340 #25 NEW cov: 12284 ft: 14434 corp: 16/533b lim: 50 exec/s: 25 rss: 72Mb L: 20/46 MS: 1 EraseBytes- 00:08:11.340 [2024-07-25 09:23:24.144060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.340 [2024-07-25 09:23:24.144095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.340 [2024-07-25 
09:23:24.144125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.340 [2024-07-25 09:23:24.144140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.340 [2024-07-25 09:23:24.144175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.340 [2024-07-25 09:23:24.144189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.599 #26 NEW cov: 12284 ft: 14454 corp: 17/568b lim: 50 exec/s: 26 rss: 72Mb L: 35/46 MS: 1 CMP- DE: "\002\000"- 00:08:11.599 [2024-07-25 09:23:24.204203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.599 [2024-07-25 09:23:24.204230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.599 [2024-07-25 09:23:24.204275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.599 [2024-07-25 09:23:24.204290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.599 [2024-07-25 09:23:24.204319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.599 [2024-07-25 09:23:24.204343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.599 #27 NEW cov: 12284 ft: 14473 corp: 18/603b lim: 50 exec/s: 27 rss: 72Mb L: 35/46 MS: 1 ChangeBit- 00:08:11.599 [2024-07-25 09:23:24.284409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.599 [2024-07-25 09:23:24.284436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.599 [2024-07-25 09:23:24.284465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.599 [2024-07-25 09:23:24.284480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.599 [2024-07-25 09:23:24.284507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.599 [2024-07-25 09:23:24.284520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.599 #28 NEW cov: 12284 ft: 14534 corp: 19/633b lim: 50 exec/s: 28 rss: 72Mb L: 30/46 MS: 1 CrossOver- 00:08:11.599 [2024-07-25 09:23:24.344616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.599 [2024-07-25 09:23:24.344643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.599 [2024-07-25 09:23:24.344673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.599 [2024-07-25 09:23:24.344688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.599 [2024-07-25 
09:23:24.344715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.599 [2024-07-25 09:23:24.344728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.599 [2024-07-25 09:23:24.344754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.599 [2024-07-25 09:23:24.344767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.599 #29 NEW cov: 12284 ft: 14606 corp: 20/677b lim: 50 exec/s: 29 rss: 72Mb L: 44/46 MS: 1 CopyPart- 00:08:11.599 [2024-07-25 09:23:24.404670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.599 [2024-07-25 09:23:24.404698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.599 [2024-07-25 09:23:24.404729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.599 [2024-07-25 09:23:24.404744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.858 #30 NEW cov: 12284 ft: 14627 corp: 21/705b lim: 50 exec/s: 30 rss: 72Mb L: 28/46 MS: 1 EraseBytes- 00:08:11.858 [2024-07-25 09:23:24.464789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.858 [2024-07-25 09:23:24.464816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.858 [2024-07-25 09:23:24.464860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.858 [2024-07-25 09:23:24.464884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.858 #31 NEW cov: 12284 ft: 14640 corp: 22/733b lim: 50 exec/s: 31 rss: 72Mb L: 28/46 MS: 1 CMP- DE: "\000\027\254`j\236\201\326"- 00:08:11.858 [2024-07-25 09:23:24.545040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.858 [2024-07-25 09:23:24.545066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.858 [2024-07-25 09:23:24.545117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.858 [2024-07-25 09:23:24.545131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.858 [2024-07-25 09:23:24.545159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.858 [2024-07-25 09:23:24.545187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.858 #32 NEW cov: 12284 ft: 14659 corp: 23/766b lim: 50 exec/s: 32 rss: 72Mb L: 33/46 MS: 1 ChangeBit- 00:08:11.858 [2024-07-25 09:23:24.605225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.858 [2024-07-25 09:23:24.605252] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.858 [2024-07-25 09:23:24.605286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.858 [2024-07-25 09:23:24.605301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.858 [2024-07-25 09:23:24.605334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.858 [2024-07-25 09:23:24.605347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.858 #33 NEW cov: 12284 ft: 14692 corp: 24/799b lim: 50 exec/s: 33 rss: 72Mb L: 33/46 MS: 1 ChangeBinInt- 00:08:11.858 [2024-07-25 09:23:24.655416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.858 [2024-07-25 09:23:24.655444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.858 [2024-07-25 09:23:24.655474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.858 [2024-07-25 09:23:24.655488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.858 [2024-07-25 09:23:24.655515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.858 [2024-07-25 09:23:24.655528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.858 [2024-07-25 09:23:24.655554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.858 [2024-07-25 09:23:24.655568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.117 #34 NEW cov: 12284 ft: 14701 corp: 25/843b lim: 50 exec/s: 34 rss: 72Mb L: 44/46 MS: 1 ChangeByte- 00:08:12.117 [2024-07-25 09:23:24.705549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.117 [2024-07-25 09:23:24.705577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.705622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.117 [2024-07-25 09:23:24.705637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.705665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.117 [2024-07-25 09:23:24.705679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.705705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.117 [2024-07-25 09:23:24.705719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:08:12.117 #35 NEW cov: 12284 ft: 14748 corp: 26/887b lim: 50 exec/s: 35 rss: 72Mb L: 44/46 MS: 1 ChangeBit- 00:08:12.117 [2024-07-25 09:23:24.795719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.117 [2024-07-25 09:23:24.795745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.795789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.117 [2024-07-25 09:23:24.795804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.795831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.117 [2024-07-25 09:23:24.795845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.117 #36 NEW cov: 12284 ft: 14757 corp: 27/921b lim: 50 exec/s: 36 rss: 72Mb L: 34/46 MS: 1 InsertByte- 00:08:12.117 [2024-07-25 09:23:24.845837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.117 [2024-07-25 09:23:24.845865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.845911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.117 [2024-07-25 09:23:24.845934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.845964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.117 [2024-07-25 09:23:24.845979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.895960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.117 [2024-07-25 09:23:24.895987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.896032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.117 [2024-07-25 09:23:24.896056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.117 [2024-07-25 09:23:24.896090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.117 [2024-07-25 09:23:24.896104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.376 #38 NEW cov: 12284 ft: 14800 corp: 28/954b lim: 50 exec/s: 38 rss: 72Mb L: 33/46 MS: 2 ChangeBit-ChangeByte- 00:08:12.376 [2024-07-25 09:23:24.946094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.376 [2024-07-25 09:23:24.946121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:12.376 [2024-07-25 09:23:24.946166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.376 [2024-07-25 09:23:24.946181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.376 [2024-07-25 09:23:24.946209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.376 [2024-07-25 09:23:24.946222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.376 #39 NEW cov: 12284 ft: 14808 corp: 29/988b lim: 50 exec/s: 19 rss: 73Mb L: 34/46 MS: 1 CopyPart- 00:08:12.376 #39 DONE cov: 12284 ft: 14808 corp: 29/988b lim: 50 exec/s: 19 rss: 73Mb 00:08:12.376 ###### Recommended dictionary. ###### 00:08:12.376 "\013\000\000\000" # Uses: 0 00:08:12.376 "\013\000" # Uses: 0 00:08:12.376 "\002\000\000\000\000\000\000\000" # Uses: 0 00:08:12.376 "\177\000" # Uses: 0 00:08:12.376 "\002\000" # Uses: 0 00:08:12.376 "\000\027\254`j\236\201\326" # Uses: 0 00:08:12.376 ###### End of recommended dictionary. ###### 00:08:12.376 Done 39 runs in 2 second(s) 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:12.376 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:12.377 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:12.377 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:12.377 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:12.377 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:12.377 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:12.377 09:23:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:12.377 [2024-07-25 09:23:25.161910] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:12.377 [2024-07-25 09:23:25.161994] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424677 ] 00:08:12.635 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.635 [2024-07-25 09:23:25.333253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.635 [2024-07-25 09:23:25.397219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.895 [2024-07-25 09:23:25.455487] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.895 [2024-07-25 09:23:25.471700] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:12.895 INFO: Running with entropic power schedule (0xFF, 100). 00:08:12.895 INFO: Seed: 2172416835 00:08:12.895 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:08:12.895 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:08:12.895 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:12.895 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.895 #2 INITED exec/s: 0 rss: 64Mb 00:08:12.895 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:12.895 This may also happen if the target rejected all inputs we tried so far 00:08:12.895 [2024-07-25 09:23:25.520619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.895 [2024-07-25 09:23:25.520645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.895 [2024-07-25 09:23:25.520685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.895 [2024-07-25 09:23:25.520697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.895 [2024-07-25 09:23:25.520746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:12.895 [2024-07-25 09:23:25.520759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.895 NEW_FUNC[1/701]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:12.895 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.895 #4 NEW cov: 12078 ft: 12077 corp: 2/66b lim: 85 exec/s: 0 rss: 71Mb L: 65/65 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:12.895 [2024-07-25 09:23:25.670884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.895 [2024-07-25 09:23:25.670916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.895 [2024-07-25 09:23:25.670969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.895 [2024-07-25 09:23:25.670984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.154 NEW_FUNC[1/1]: 0x163e300 in nvme_ctrlr_get_ready_timeout /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:1288 00:08:13.154 #5 NEW cov: 12196 ft: 12904 corp: 3/106b lim: 85 exec/s: 0 rss: 71Mb L: 40/65 MS: 1 EraseBytes- 00:08:13.154 [2024-07-25 09:23:25.731168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.154 [2024-07-25 09:23:25.731195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.731250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.154 [2024-07-25 09:23:25.731264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.731315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.154 [2024-07-25 09:23:25.731331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.154 #6 NEW cov: 12202 ft: 13171 corp: 4/171b lim: 85 exec/s: 0 rss: 71Mb L: 65/65 MS: 1 ChangeByte- 00:08:13.154 [2024-07-25 09:23:25.770970] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.154 [2024-07-25 09:23:25.770996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.154 #10 NEW cov: 12287 ft: 14193 corp: 5/194b lim: 85 exec/s: 0 rss: 71Mb L: 23/65 MS: 4 CrossOver-ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:13.154 [2024-07-25 09:23:25.811272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.154 [2024-07-25 09:23:25.811296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.811333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.154 [2024-07-25 09:23:25.811346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.154 #16 NEW cov: 12287 ft: 14352 corp: 6/233b lim: 85 exec/s: 0 rss: 72Mb L: 39/65 MS: 1 EraseBytes- 00:08:13.154 [2024-07-25 09:23:25.861682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.154 [2024-07-25 09:23:25.861707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.861760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.154 [2024-07-25 09:23:25.861774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.861824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.154 [2024-07-25 09:23:25.861837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.861890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.154 [2024-07-25 09:23:25.861903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.154 #17 NEW cov: 12287 ft: 14742 corp: 7/307b lim: 85 exec/s: 0 rss: 72Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:08:13.154 [2024-07-25 09:23:25.901822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.154 [2024-07-25 09:23:25.901846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.901899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.154 [2024-07-25 09:23:25.901913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.901967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.154 [2024-07-25 09:23:25.901981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:13.154 [2024-07-25 09:23:25.902034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.154 [2024-07-25 09:23:25.902048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.154 #18 NEW cov: 12287 ft: 14795 corp: 8/379b lim: 85 exec/s: 0 rss: 72Mb L: 72/74 MS: 1 CrossOver- 00:08:13.154 [2024-07-25 09:23:25.951978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.154 [2024-07-25 09:23:25.952003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.952053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.154 [2024-07-25 09:23:25.952067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.952125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.154 [2024-07-25 09:23:25.952137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.154 [2024-07-25 09:23:25.952191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.154 [2024-07-25 09:23:25.952204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.414 #19 NEW cov: 12287 ft: 14836 corp: 9/456b lim: 85 exec/s: 0 rss: 72Mb L: 77/77 MS: 1 CrossOver- 00:08:13.414 [2024-07-25 09:23:26.001941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.414 [2024-07-25 09:23:26.001965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.002001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.414 [2024-07-25 09:23:26.002013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.002067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.414 [2024-07-25 09:23:26.002083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.414 #20 NEW cov: 12287 ft: 14975 corp: 10/521b lim: 85 exec/s: 0 rss: 72Mb L: 65/77 MS: 1 ChangeBit- 00:08:13.414 [2024-07-25 09:23:26.041935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.414 [2024-07-25 09:23:26.041962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.042009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.414 [2024-07-25 09:23:26.042022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:08:13.414 #21 NEW cov: 12287 ft: 15039 corp: 11/560b lim: 85 exec/s: 0 rss: 72Mb L: 39/77 MS: 1 ShuffleBytes- 00:08:13.414 [2024-07-25 09:23:26.092387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.414 [2024-07-25 09:23:26.092410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.092457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.414 [2024-07-25 09:23:26.092470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.092522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.414 [2024-07-25 09:23:26.092535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.092588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.414 [2024-07-25 09:23:26.092601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.414 #22 NEW cov: 12287 ft: 15081 corp: 12/630b lim: 85 exec/s: 0 rss: 72Mb L: 70/77 MS: 1 CrossOver- 00:08:13.414 [2024-07-25 09:23:26.132507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.414 [2024-07-25 09:23:26.132532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.132588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.414 [2024-07-25 09:23:26.132602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.132658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.414 [2024-07-25 09:23:26.132670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.132725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.414 [2024-07-25 09:23:26.132740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.414 #23 NEW cov: 12287 ft: 15094 corp: 13/703b lim: 85 exec/s: 0 rss: 72Mb L: 73/77 MS: 1 CMP- DE: "\035\017\3469a\254\027\000"- 00:08:13.414 [2024-07-25 09:23:26.172486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.414 [2024-07-25 09:23:26.172511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.172563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.414 [2024-07-25 09:23:26.172574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.414 [2024-07-25 09:23:26.172626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.414 [2024-07-25 09:23:26.172639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.414 #24 NEW cov: 12287 ft: 15127 corp: 14/768b lim: 85 exec/s: 0 rss: 72Mb L: 65/77 MS: 1 ChangeByte- 00:08:13.673 [2024-07-25 09:23:26.222770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.673 [2024-07-25 09:23:26.222796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.673 [2024-07-25 09:23:26.222851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.673 [2024-07-25 09:23:26.222864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.673 [2024-07-25 09:23:26.222918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.674 [2024-07-25 09:23:26.222931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.222986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.674 [2024-07-25 09:23:26.222999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.674 #25 NEW cov: 12287 ft: 15189 corp: 15/840b lim: 85 exec/s: 0 rss: 72Mb L: 72/77 MS: 1 InsertRepeatedBytes- 00:08:13.674 [2024-07-25 09:23:26.262865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.674 [2024-07-25 09:23:26.262891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.262944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.674 [2024-07-25 09:23:26.262958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.263010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.674 [2024-07-25 09:23:26.263024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.263078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.674 [2024-07-25 09:23:26.263092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.674 #26 NEW cov: 12287 ft: 15213 corp: 16/914b lim: 85 exec/s: 0 rss: 72Mb L: 74/77 MS: 1 ChangeByte- 00:08:13.674 [2024-07-25 09:23:26.322746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.674 [2024-07-25 09:23:26.322771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.322811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.674 [2024-07-25 09:23:26.322825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.674 #27 NEW cov: 12287 ft: 15255 corp: 17/953b lim: 85 exec/s: 0 rss: 72Mb L: 39/77 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:13.674 [2024-07-25 09:23:26.372864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.674 [2024-07-25 09:23:26.372888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.372926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.674 [2024-07-25 09:23:26.372941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.674 #28 NEW cov: 12287 ft: 15263 corp: 18/988b lim: 85 exec/s: 0 rss: 72Mb L: 35/77 MS: 1 EraseBytes- 00:08:13.674 [2024-07-25 09:23:26.413115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.674 [2024-07-25 09:23:26.413140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.413193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.674 [2024-07-25 09:23:26.413205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.413261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.674 [2024-07-25 09:23:26.413275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.674 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:13.674 #34 NEW cov: 12310 ft: 15365 corp: 19/1049b lim: 85 exec/s: 0 rss: 72Mb L: 61/77 MS: 1 CopyPart- 00:08:13.674 [2024-07-25 09:23:26.463269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.674 [2024-07-25 09:23:26.463293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.463342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.674 [2024-07-25 09:23:26.463356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.674 [2024-07-25 09:23:26.463408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.674 [2024-07-25 09:23:26.463422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.933 #35 NEW cov: 12310 ft: 15389 corp: 20/1114b lim: 85 exec/s: 0 rss: 72Mb L: 65/77 MS: 1 ShuffleBytes- 
00:08:13.933 [2024-07-25 09:23:26.513419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.933 [2024-07-25 09:23:26.513443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.933 [2024-07-25 09:23:26.513494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.933 [2024-07-25 09:23:26.513507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.933 [2024-07-25 09:23:26.513560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.933 [2024-07-25 09:23:26.513573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.933 #36 NEW cov: 12310 ft: 15412 corp: 21/1179b lim: 85 exec/s: 36 rss: 72Mb L: 65/77 MS: 1 ChangeByte- 00:08:13.933 [2024-07-25 09:23:26.553390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.933 [2024-07-25 09:23:26.553414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.933 [2024-07-25 09:23:26.553453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.933 [2024-07-25 09:23:26.553467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.933 #37 NEW cov: 12310 ft: 15430 corp: 22/1218b lim: 85 exec/s: 37 rss: 72Mb L: 39/77 MS: 1 CrossOver- 00:08:13.933 [2024-07-25 09:23:26.593710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.933 [2024-07-25 09:23:26.593734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.933 [2024-07-25 09:23:26.593770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.933 [2024-07-25 09:23:26.593786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.933 [2024-07-25 09:23:26.593841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.934 [2024-07-25 09:23:26.593854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.934 #38 NEW cov: 12310 ft: 15470 corp: 23/1284b lim: 85 exec/s: 38 rss: 72Mb L: 66/77 MS: 1 InsertByte- 00:08:13.934 [2024-07-25 09:23:26.633925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.934 [2024-07-25 09:23:26.633949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.934 [2024-07-25 09:23:26.634001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.934 [2024-07-25 09:23:26.634015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:08:13.934 [2024-07-25 09:23:26.634072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.934 [2024-07-25 09:23:26.634085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.934 [2024-07-25 09:23:26.634138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.934 [2024-07-25 09:23:26.634151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.934 #39 NEW cov: 12310 ft: 15532 corp: 24/1365b lim: 85 exec/s: 39 rss: 72Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:08:13.934 [2024-07-25 09:23:26.683919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.934 [2024-07-25 09:23:26.683943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.934 [2024-07-25 09:23:26.683991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.934 [2024-07-25 09:23:26.684003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.934 [2024-07-25 09:23:26.684056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.934 [2024-07-25 09:23:26.684073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.934 #40 NEW cov: 12310 ft: 15603 corp: 25/1430b lim: 85 exec/s: 40 rss: 72Mb L: 65/81 MS: 1 ChangeByte- 00:08:13.934 [2024-07-25 09:23:26.734085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.934 [2024-07-25 09:23:26.734109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.934 [2024-07-25 09:23:26.734165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.934 [2024-07-25 09:23:26.734178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.934 [2024-07-25 09:23:26.734230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.934 [2024-07-25 09:23:26.734243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.193 #41 NEW cov: 12310 ft: 15626 corp: 26/1495b lim: 85 exec/s: 41 rss: 72Mb L: 65/81 MS: 1 ChangeBinInt- 00:08:14.193 [2024-07-25 09:23:26.784217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.193 [2024-07-25 09:23:26.784245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.784286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.193 [2024-07-25 09:23:26.784298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.784348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.193 [2024-07-25 09:23:26.784361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.193 #42 NEW cov: 12310 ft: 15631 corp: 27/1562b lim: 85 exec/s: 42 rss: 72Mb L: 67/81 MS: 1 CMP- DE: "\000\000"- 00:08:14.193 [2024-07-25 09:23:26.824499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.193 [2024-07-25 09:23:26.824523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.824573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.193 [2024-07-25 09:23:26.824586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.824635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.193 [2024-07-25 09:23:26.824648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.824698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.193 [2024-07-25 09:23:26.824711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.193 #43 NEW cov: 12310 ft: 15640 corp: 28/1642b lim: 85 exec/s: 43 rss: 72Mb L: 80/81 MS: 1 InsertRepeatedBytes- 00:08:14.193 [2024-07-25 09:23:26.874504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.193 [2024-07-25 09:23:26.874529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.874580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.193 [2024-07-25 09:23:26.874593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.874644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.193 [2024-07-25 09:23:26.874659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.193 #44 NEW cov: 12310 ft: 15645 corp: 29/1707b lim: 85 exec/s: 44 rss: 73Mb L: 65/81 MS: 1 PersAutoDict- DE: "\035\017\3469a\254\027\000"- 00:08:14.193 [2024-07-25 09:23:26.914433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.193 [2024-07-25 09:23:26.914457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.914496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.193 [2024-07-25 09:23:26.914509] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.193 #45 NEW cov: 12310 ft: 15667 corp: 30/1746b lim: 85 exec/s: 45 rss: 73Mb L: 39/81 MS: 1 ChangeBit- 00:08:14.193 [2024-07-25 09:23:26.964613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.193 [2024-07-25 09:23:26.964636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.193 [2024-07-25 09:23:26.964676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.193 [2024-07-25 09:23:26.964689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.452 #46 NEW cov: 12310 ft: 15680 corp: 31/1796b lim: 85 exec/s: 46 rss: 73Mb L: 50/81 MS: 1 InsertRepeatedBytes- 00:08:14.452 [2024-07-25 09:23:27.015046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.452 [2024-07-25 09:23:27.015072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.015118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.452 [2024-07-25 09:23:27.015128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.015181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.452 [2024-07-25 09:23:27.015194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.015248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.452 [2024-07-25 09:23:27.015262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.452 #47 NEW cov: 12310 ft: 15698 corp: 32/1880b lim: 85 exec/s: 47 rss: 73Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:08:14.452 [2024-07-25 09:23:27.065210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.452 [2024-07-25 09:23:27.065234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.065297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.452 [2024-07-25 09:23:27.065307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.065361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.452 [2024-07-25 09:23:27.065375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.065428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.452 [2024-07-25 09:23:27.065441] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.452 #48 NEW cov: 12310 ft: 15702 corp: 33/1954b lim: 85 exec/s: 48 rss: 73Mb L: 74/84 MS: 1 ChangeByte- 00:08:14.452 [2024-07-25 09:23:27.105322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.452 [2024-07-25 09:23:27.105346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.105398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.452 [2024-07-25 09:23:27.105411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.105462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.452 [2024-07-25 09:23:27.105476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.105528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.452 [2024-07-25 09:23:27.105544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.452 #49 NEW cov: 12310 ft: 15710 corp: 34/2026b lim: 85 exec/s: 49 rss: 73Mb L: 72/84 MS: 1 ChangeByte- 00:08:14.452 [2024-07-25 09:23:27.145450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.452 [2024-07-25 09:23:27.145474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.145520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.452 [2024-07-25 09:23:27.145530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.145579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.452 [2024-07-25 09:23:27.145592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.145644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.452 [2024-07-25 09:23:27.145658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.452 #50 NEW cov: 12310 ft: 15723 corp: 35/2100b lim: 85 exec/s: 50 rss: 73Mb L: 74/84 MS: 1 ChangeByte- 00:08:14.452 [2024-07-25 09:23:27.185562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.452 [2024-07-25 09:23:27.185585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.185637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.452 [2024-07-25 09:23:27.185650] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.185702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.452 [2024-07-25 09:23:27.185715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.452 [2024-07-25 09:23:27.185766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.452 [2024-07-25 09:23:27.185779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.452 #51 NEW cov: 12310 ft: 15730 corp: 36/2172b lim: 85 exec/s: 51 rss: 73Mb L: 72/84 MS: 1 ChangeBit- 00:08:14.452 [2024-07-25 09:23:27.225360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.452 [2024-07-25 09:23:27.225383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.453 [2024-07-25 09:23:27.225425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.453 [2024-07-25 09:23:27.225439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.453 #52 NEW cov: 12310 ft: 15771 corp: 37/2211b lim: 85 exec/s: 52 rss: 73Mb L: 39/84 MS: 1 CopyPart- 00:08:14.712 [2024-07-25 09:23:27.265663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.712 [2024-07-25 09:23:27.265688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.265741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.712 [2024-07-25 09:23:27.265755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.265813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.712 [2024-07-25 09:23:27.265827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.712 #53 NEW cov: 12310 ft: 15817 corp: 38/2272b lim: 85 exec/s: 53 rss: 73Mb L: 61/84 MS: 1 CopyPart- 00:08:14.712 [2024-07-25 09:23:27.315611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.712 [2024-07-25 09:23:27.315634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.315689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.712 [2024-07-25 09:23:27.315702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.712 #54 NEW cov: 12310 ft: 15839 corp: 39/2311b lim: 85 exec/s: 54 rss: 73Mb L: 39/84 MS: 1 ShuffleBytes- 00:08:14.712 [2024-07-25 09:23:27.355871] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.712 [2024-07-25 09:23:27.355894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.355944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.712 [2024-07-25 09:23:27.355956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.356005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.712 [2024-07-25 09:23:27.356018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.712 #55 NEW cov: 12310 ft: 15859 corp: 40/2378b lim: 85 exec/s: 55 rss: 74Mb L: 67/84 MS: 1 ShuffleBytes- 00:08:14.712 [2024-07-25 09:23:27.406377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.712 [2024-07-25 09:23:27.406402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.406451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.712 [2024-07-25 09:23:27.406461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.406510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.712 [2024-07-25 09:23:27.406523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.406574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.712 [2024-07-25 09:23:27.406587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.406639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:08:14.712 [2024-07-25 09:23:27.406652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:14.712 #56 NEW cov: 12310 ft: 15920 corp: 41/2463b lim: 85 exec/s: 56 rss: 74Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:08:14.712 [2024-07-25 09:23:27.456086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.712 [2024-07-25 09:23:27.456111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.456163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.712 [2024-07-25 09:23:27.456179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.456232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 
00:08:14.712 [2024-07-25 09:23:27.456246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.712 #57 NEW cov: 12310 ft: 15922 corp: 42/2528b lim: 85 exec/s: 57 rss: 74Mb L: 65/85 MS: 1 ShuffleBytes- 00:08:14.712 [2024-07-25 09:23:27.496080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.712 [2024-07-25 09:23:27.496105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.712 [2024-07-25 09:23:27.496145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.712 [2024-07-25 09:23:27.496159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.712 #58 NEW cov: 12310 ft: 15939 corp: 43/2573b lim: 85 exec/s: 29 rss: 74Mb L: 45/85 MS: 1 EraseBytes- 00:08:14.712 #58 DONE cov: 12310 ft: 15939 corp: 43/2573b lim: 85 exec/s: 29 rss: 74Mb 00:08:14.712 ###### Recommended dictionary. ###### 00:08:14.712 "\035\017\3469a\254\027\000" # Uses: 1 00:08:14.712 "\377\377\377\377\377\377\377\377" # Uses: 0 00:08:14.712 "\000\000" # Uses: 0 00:08:14.712 ###### End of recommended dictionary. ###### 00:08:14.712 Done 58 runs in 2 second(s) 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:14.971 09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:14.971 
09:23:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:14.971 [2024-07-25 09:23:27.667267] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:14.971 [2024-07-25 09:23:27.667345] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424993 ] 00:08:14.971 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.230 [2024-07-25 09:23:27.834098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.230 [2024-07-25 09:23:27.898538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.230 [2024-07-25 09:23:27.956806] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.230 [2024-07-25 09:23:27.973034] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:15.230 INFO: Running with entropic power schedule (0xFF, 100). 00:08:15.230 INFO: Seed: 380450458 00:08:15.230 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:08:15.230 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:08:15.230 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:15.230 INFO: A corpus is not provided, starting from an empty corpus 00:08:15.230 #2 INITED exec/s: 0 rss: 63Mb 00:08:15.230 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:15.230 This may also happen if the target rejected all inputs we tried so far 00:08:15.230 [2024-07-25 09:23:28.017787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.230 [2024-07-25 09:23:28.017819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.230 [2024-07-25 09:23:28.017849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.230 [2024-07-25 09:23:28.017864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.230 [2024-07-25 09:23:28.017891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.230 [2024-07-25 09:23:28.017905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.230 [2024-07-25 09:23:28.017931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.231 [2024-07-25 09:23:28.017944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.489 NEW_FUNC[1/701]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:15.489 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:15.489 #11 NEW cov: 12016 ft: 12010 corp: 2/22b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 4 ChangeBit-InsertByte-InsertByte-InsertRepeatedBytes- 00:08:15.489 [2024-07-25 09:23:28.188179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.489 [2024-07-25 09:23:28.188215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.489 [2024-07-25 09:23:28.188247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.489 [2024-07-25 09:23:28.188262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.489 [2024-07-25 09:23:28.188297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.489 [2024-07-25 09:23:28.188311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.489 [2024-07-25 09:23:28.188336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.489 [2024-07-25 09:23:28.188349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.489 #17 NEW cov: 12129 ft: 12567 corp: 3/43b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 ShuffleBytes- 00:08:15.489 [2024-07-25 09:23:28.268321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.489 [2024-07-25 09:23:28.268351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:08:15.489 [2024-07-25 09:23:28.268381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.489 [2024-07-25 09:23:28.268396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.490 [2024-07-25 09:23:28.268425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.490 [2024-07-25 09:23:28.268438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.490 [2024-07-25 09:23:28.268465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.490 [2024-07-25 09:23:28.268478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.748 #18 NEW cov: 12135 ft: 12833 corp: 4/64b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 ChangeByte- 00:08:15.748 [2024-07-25 09:23:28.328310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.748 [2024-07-25 09:23:28.328337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.748 #23 NEW cov: 12220 ft: 13827 corp: 5/69b lim: 25 exec/s: 0 rss: 70Mb L: 5/21 MS: 5 InsertRepeatedBytes-CopyPart-ShuffleBytes-ChangeBinInt-InsertByte- 00:08:15.748 [2024-07-25 09:23:28.388560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.748 [2024-07-25 09:23:28.388587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.748 [2024-07-25 09:23:28.388631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.748 [2024-07-25 09:23:28.388645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.748 [2024-07-25 09:23:28.388672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.748 [2024-07-25 09:23:28.388686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.748 [2024-07-25 09:23:28.388711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.748 [2024-07-25 09:23:28.388724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.748 #24 NEW cov: 12220 ft: 13929 corp: 6/90b lim: 25 exec/s: 0 rss: 71Mb L: 21/21 MS: 1 ChangeBinInt- 00:08:15.748 [2024-07-25 09:23:28.468794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.748 [2024-07-25 09:23:28.468822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.748 [2024-07-25 09:23:28.468852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.748 [2024-07-25 09:23:28.468866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.748 [2024-07-25 09:23:28.468894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.748 [2024-07-25 09:23:28.468907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.748 [2024-07-25 09:23:28.468933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.748 [2024-07-25 09:23:28.468950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.748 #25 NEW cov: 12220 ft: 13993 corp: 7/113b lim: 25 exec/s: 0 rss: 71Mb L: 23/23 MS: 1 CrossOver- 00:08:15.748 [2024-07-25 09:23:28.549029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.748 [2024-07-25 09:23:28.549057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.749 [2024-07-25 09:23:28.549097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.749 [2024-07-25 09:23:28.549112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.749 [2024-07-25 09:23:28.549140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.749 [2024-07-25 09:23:28.549153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.749 [2024-07-25 09:23:28.549194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.749 [2024-07-25 09:23:28.549208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.008 #26 NEW cov: 12220 ft: 14046 corp: 8/134b lim: 25 exec/s: 0 rss: 71Mb L: 21/23 MS: 1 CopyPart- 00:08:16.008 [2024-07-25 09:23:28.629222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.008 [2024-07-25 09:23:28.629249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.629278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.008 [2024-07-25 09:23:28.629293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.629320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.008 [2024-07-25 09:23:28.629333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.629358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.008 [2024-07-25 09:23:28.629371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.008 #27 NEW cov: 12220 ft: 14067 corp: 9/156b lim: 25 exec/s: 0 rss: 71Mb L: 22/23 
MS: 1 EraseBytes- 00:08:16.008 [2024-07-25 09:23:28.709438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.008 [2024-07-25 09:23:28.709466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.709495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.008 [2024-07-25 09:23:28.709509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.709537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.008 [2024-07-25 09:23:28.709550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.709576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.008 [2024-07-25 09:23:28.709589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.008 #28 NEW cov: 12220 ft: 14108 corp: 10/180b lim: 25 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:08:16.008 [2024-07-25 09:23:28.759510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.008 [2024-07-25 09:23:28.759536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.759580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.008 [2024-07-25 09:23:28.759594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.759624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.008 [2024-07-25 09:23:28.759638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.759663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.008 [2024-07-25 09:23:28.759676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.008 #29 NEW cov: 12220 ft: 14139 corp: 11/201b lim: 25 exec/s: 0 rss: 71Mb L: 21/24 MS: 1 ChangeByte- 00:08:16.008 [2024-07-25 09:23:28.809669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.008 [2024-07-25 09:23:28.809696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.809739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.008 [2024-07-25 09:23:28.809753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.809782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.008 [2024-07-25 09:23:28.809795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.008 [2024-07-25 09:23:28.809820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.008 [2024-07-25 09:23:28.809833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.267 #30 NEW cov: 12220 ft: 14151 corp: 12/223b lim: 25 exec/s: 0 rss: 71Mb L: 22/24 MS: 1 InsertByte- 00:08:16.267 [2024-07-25 09:23:28.859802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.267 [2024-07-25 09:23:28.859830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.267 [2024-07-25 09:23:28.859876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.267 [2024-07-25 09:23:28.859891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.267 [2024-07-25 09:23:28.859918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.267 [2024-07-25 09:23:28.859932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.267 [2024-07-25 09:23:28.859958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.267 [2024-07-25 09:23:28.859972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.267 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:16.267 #31 NEW cov: 12243 ft: 14188 corp: 13/244b lim: 25 exec/s: 0 rss: 71Mb L: 21/24 MS: 1 ChangeByte- 00:08:16.267 [2024-07-25 09:23:28.939890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.267 [2024-07-25 09:23:28.939917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.267 #32 NEW cov: 12243 ft: 14225 corp: 14/252b lim: 25 exec/s: 32 rss: 72Mb L: 8/24 MS: 1 CopyPart- 00:08:16.267 [2024-07-25 09:23:29.030131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.267 [2024-07-25 09:23:29.030160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.267 #33 NEW cov: 12243 ft: 14242 corp: 15/260b lim: 25 exec/s: 33 rss: 72Mb L: 8/24 MS: 1 InsertRepeatedBytes- 00:08:16.526 [2024-07-25 09:23:29.080401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.526 [2024-07-25 09:23:29.080429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.526 [2024-07-25 09:23:29.080459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.526 [2024-07-25 09:23:29.080472] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.526 [2024-07-25 09:23:29.080499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.526 [2024-07-25 09:23:29.080513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.526 [2024-07-25 09:23:29.080553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.526 [2024-07-25 09:23:29.080567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.526 #34 NEW cov: 12243 ft: 14326 corp: 16/284b lim: 25 exec/s: 34 rss: 72Mb L: 24/24 MS: 1 CopyPart- 00:08:16.526 [2024-07-25 09:23:29.160549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.526 [2024-07-25 09:23:29.160576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.526 [2024-07-25 09:23:29.160621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.526 [2024-07-25 09:23:29.160635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.526 #35 NEW cov: 12243 ft: 14629 corp: 17/294b lim: 25 exec/s: 35 rss: 72Mb L: 10/24 MS: 1 CrossOver- 00:08:16.526 [2024-07-25 09:23:29.240728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.526 [2024-07-25 09:23:29.240755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.526 [2024-07-25 09:23:29.240799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.526 [2024-07-25 09:23:29.240814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.526 #36 NEW cov: 12243 ft: 14676 corp: 18/305b lim: 25 exec/s: 36 rss: 72Mb L: 11/24 MS: 1 InsertByte- 00:08:16.526 [2024-07-25 09:23:29.320916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.526 [2024-07-25 09:23:29.320943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.784 #37 NEW cov: 12243 ft: 14724 corp: 19/310b lim: 25 exec/s: 37 rss: 72Mb L: 5/24 MS: 1 ShuffleBytes- 00:08:16.784 [2024-07-25 09:23:29.381061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.784 [2024-07-25 09:23:29.381093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.381137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.784 [2024-07-25 09:23:29.381155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.784 #38 NEW cov: 12243 ft: 14767 corp: 20/322b lim: 25 exec/s: 38 rss: 72Mb L: 12/24 
MS: 1 EraseBytes- 00:08:16.784 [2024-07-25 09:23:29.441250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.784 [2024-07-25 09:23:29.441278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.441308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.784 [2024-07-25 09:23:29.441323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.784 #39 NEW cov: 12243 ft: 14801 corp: 21/334b lim: 25 exec/s: 39 rss: 72Mb L: 12/24 MS: 1 ChangeBinInt- 00:08:16.784 [2024-07-25 09:23:29.521639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.784 [2024-07-25 09:23:29.521666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.521696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.784 [2024-07-25 09:23:29.521710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.521737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.784 [2024-07-25 09:23:29.521750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.521775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.784 [2024-07-25 09:23:29.521787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.521813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:16.784 [2024-07-25 09:23:29.521825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:16.784 #45 NEW cov: 12243 ft: 14845 corp: 22/359b lim: 25 exec/s: 45 rss: 72Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:08:16.784 [2024-07-25 09:23:29.571774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.784 [2024-07-25 09:23:29.571801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.571831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.784 [2024-07-25 09:23:29.571845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.571872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.784 [2024-07-25 09:23:29.571885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.784 [2024-07-25 09:23:29.571910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:16.785 [2024-07-25 09:23:29.571923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.785 [2024-07-25 09:23:29.571949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:16.785 [2024-07-25 09:23:29.571962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:17.044 #46 NEW cov: 12243 ft: 14908 corp: 23/384b lim: 25 exec/s: 46 rss: 72Mb L: 25/25 MS: 1 CrossOver- 00:08:17.044 [2024-07-25 09:23:29.651758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.044 [2024-07-25 09:23:29.651784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.044 #47 NEW cov: 12243 ft: 14917 corp: 24/389b lim: 25 exec/s: 47 rss: 72Mb L: 5/25 MS: 1 ShuffleBytes- 00:08:17.044 [2024-07-25 09:23:29.732099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.044 [2024-07-25 09:23:29.732126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.044 [2024-07-25 09:23:29.732170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.044 [2024-07-25 09:23:29.732184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.044 [2024-07-25 09:23:29.732211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.044 [2024-07-25 09:23:29.732224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.044 [2024-07-25 09:23:29.732250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.044 [2024-07-25 09:23:29.732263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.044 #48 NEW cov: 12243 ft: 14924 corp: 25/411b lim: 25 exec/s: 48 rss: 72Mb L: 22/25 MS: 1 ChangeBit- 00:08:17.044 [2024-07-25 09:23:29.812326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.044 [2024-07-25 09:23:29.812353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.044 [2024-07-25 09:23:29.812397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.044 [2024-07-25 09:23:29.812411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.044 [2024-07-25 09:23:29.812440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.044 [2024-07-25 09:23:29.812453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.044 [2024-07-25 09:23:29.812478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT 
(0e) sqid:1 cid:3 nsid:0 00:08:17.044 [2024-07-25 09:23:29.812491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.044 #49 NEW cov: 12243 ft: 14926 corp: 26/432b lim: 25 exec/s: 49 rss: 72Mb L: 21/25 MS: 1 ChangeBit- 00:08:17.302 [2024-07-25 09:23:29.862388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.302 [2024-07-25 09:23:29.862414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.302 [2024-07-25 09:23:29.862458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.302 [2024-07-25 09:23:29.862473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.302 [2024-07-25 09:23:29.862501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.302 [2024-07-25 09:23:29.862513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.302 #50 NEW cov: 12243 ft: 15123 corp: 27/450b lim: 25 exec/s: 50 rss: 72Mb L: 18/25 MS: 1 EraseBytes- 00:08:17.302 [2024-07-25 09:23:29.942663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.302 [2024-07-25 09:23:29.942695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.302 [2024-07-25 09:23:29.942725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.302 [2024-07-25 09:23:29.942739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.302 [2024-07-25 09:23:29.942768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.302 [2024-07-25 09:23:29.942781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.302 #51 NEW cov: 12243 ft: 15130 corp: 28/468b lim: 25 exec/s: 25 rss: 72Mb L: 18/25 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:17.302 #51 DONE cov: 12243 ft: 15130 corp: 28/468b lim: 25 exec/s: 25 rss: 72Mb 00:08:17.302 ###### Recommended dictionary. ###### 00:08:17.302 "\000\000\000\000\000\000\000\000" # Uses: 0 00:08:17.302 ###### End of recommended dictionary. 
###### 00:08:17.302 Done 51 runs in 2 second(s) 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:17.560 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:17.561 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:17.561 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:17.561 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:17.561 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:17.561 09:23:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:17.561 [2024-07-25 09:23:30.167791] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:17.561 [2024-07-25 09:23:30.167860] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425347 ] 00:08:17.561 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.561 [2024-07-25 09:23:30.338050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.819 [2024-07-25 09:23:30.405374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.819 [2024-07-25 09:23:30.464243] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.819 [2024-07-25 09:23:30.480476] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:17.819 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.819 INFO: Seed: 2888467233 00:08:17.819 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:08:17.819 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:08:17.819 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:17.819 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.819 #2 INITED exec/s: 0 rss: 63Mb 00:08:17.819 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:17.819 This may also happen if the target rejected all inputs we tried so far 00:08:17.819 [2024-07-25 09:23:30.548143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568700927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.819 [2024-07-25 09:23:30.548181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.819 [2024-07-25 09:23:30.548242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.819 [2024-07-25 09:23:30.548261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.819 [2024-07-25 09:23:30.548318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.819 [2024-07-25 09:23:30.548336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.819 [2024-07-25 09:23:30.548426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.819 [2024-07-25 09:23:30.548446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.077 NEW_FUNC[1/702]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:18.077 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:18.077 #15 NEW cov: 12083 ft: 12089 corp: 2/81b lim: 100 exec/s: 0 rss: 70Mb L: 80/80 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:18.077 [2024-07-25 09:23:30.718114] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568700927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.077 [2024-07-25 09:23:30.718151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.077 [2024-07-25 09:23:30.718238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.077 [2024-07-25 09:23:30.718256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.077 #26 NEW cov: 12201 ft: 13013 corp: 3/121b lim: 100 exec/s: 0 rss: 71Mb L: 40/80 MS: 1 EraseBytes- 00:08:18.077 [2024-07-25 09:23:30.788467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.077 [2024-07-25 09:23:30.788497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.077 [2024-07-25 09:23:30.788569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5859553999884210513 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.077 [2024-07-25 09:23:30.788585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.077 #29 NEW cov: 12207 ft: 13368 corp: 4/175b lim: 100 exec/s: 0 rss: 71Mb L: 54/80 MS: 3 CopyPart-CrossOver-InsertRepeatedBytes- 00:08:18.077 [2024-07-25 09:23:30.838319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.077 [2024-07-25 09:23:30.838347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.077 #32 NEW cov: 12292 ft: 14374 corp: 5/195b lim: 100 exec/s: 0 rss: 71Mb L: 20/80 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:08:18.335 [2024-07-25 09:23:30.888790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:30.888819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.335 [2024-07-25 09:23:30.888885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12587190072258941359 len:45650 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:30.888906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.335 #33 NEW cov: 12292 ft: 14441 corp: 6/249b lim: 100 exec/s: 0 rss: 71Mb L: 54/80 MS: 1 ChangeBinInt- 00:08:18.335 [2024-07-25 09:23:30.949040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:30.949066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.335 [2024-07-25 09:23:30.949145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:1 nsid:0 lba:10281347063045247407 len:45650 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:30.949164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.335 #34 NEW cov: 12292 ft: 14508 corp: 7/303b lim: 100 exec/s: 0 rss: 71Mb L: 54/80 MS: 1 ChangeBit- 00:08:18.335 [2024-07-25 09:23:31.009123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:31.009151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.335 #35 NEW cov: 12292 ft: 14543 corp: 8/327b lim: 100 exec/s: 0 rss: 72Mb L: 24/80 MS: 1 CMP- DE: "\017\000\000\000"- 00:08:18.335 [2024-07-25 09:23:31.079775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:31.079803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.335 [2024-07-25 09:23:31.079865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10281347063045247407 len:45650 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:31.079882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.335 #36 NEW cov: 12292 ft: 14633 corp: 9/381b lim: 100 exec/s: 0 rss: 72Mb L: 54/80 MS: 1 ShuffleBytes- 00:08:18.335 [2024-07-25 09:23:31.140830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568700927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:31.140856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.335 [2024-07-25 09:23:31.140958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:31.140974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.335 [2024-07-25 09:23:31.141054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:31.141075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.335 [2024-07-25 09:23:31.141172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.335 [2024-07-25 09:23:31.141189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.593 #37 NEW cov: 12292 ft: 14665 corp: 10/461b lim: 100 exec/s: 0 rss: 72Mb L: 80/80 MS: 1 ChangeBit- 00:08:18.593 [2024-07-25 09:23:31.190777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20992 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:08:18.593 [2024-07-25 09:23:31.190803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.593 [2024-07-25 09:23:31.190883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.593 [2024-07-25 09:23:31.190898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.593 [2024-07-25 09:23:31.190982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5859553999884210513 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.593 [2024-07-25 09:23:31.191002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.593 #38 NEW cov: 12292 ft: 14973 corp: 11/525b lim: 100 exec/s: 0 rss: 72Mb L: 64/80 MS: 1 InsertRepeatedBytes- 00:08:18.593 [2024-07-25 09:23:31.241198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568700927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.593 [2024-07-25 09:23:31.241225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.593 [2024-07-25 09:23:31.241324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.593 [2024-07-25 09:23:31.241339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.593 [2024-07-25 09:23:31.241407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.593 [2024-07-25 09:23:31.241426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.593 #39 NEW cov: 12292 ft: 14999 corp: 12/585b lim: 100 exec/s: 0 rss: 72Mb L: 60/80 MS: 1 InsertRepeatedBytes- 00:08:18.593 [2024-07-25 09:23:31.310803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.593 [2024-07-25 09:23:31.310828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.593 #40 NEW cov: 12292 ft: 15054 corp: 13/606b lim: 100 exec/s: 0 rss: 72Mb L: 21/80 MS: 1 CrossOver- 00:08:18.593 [2024-07-25 09:23:31.361027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.593 [2024-07-25 09:23:31.361052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.593 #41 NEW cov: 12292 ft: 15129 corp: 14/627b lim: 100 exec/s: 0 rss: 72Mb L: 21/80 MS: 1 ChangeByte- 00:08:18.851 [2024-07-25 09:23:31.432838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.851 [2024-07-25 09:23:31.432866] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.851 [2024-07-25 09:23:31.432982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10281347063045247407 len:45650 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.851 [2024-07-25 09:23:31.432999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.851 [2024-07-25 09:23:31.433090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.851 [2024-07-25 09:23:31.433111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.851 [2024-07-25 09:23:31.433220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.851 [2024-07-25 09:23:31.433240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.851 [2024-07-25 09:23:31.433334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:22888881447763968 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.851 [2024-07-25 09:23:31.433352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:18.851 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:18.851 #42 NEW cov: 12315 ft: 15300 corp: 15/727b lim: 100 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:08:18.852 [2024-07-25 09:23:31.502430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20992 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.852 [2024-07-25 09:23:31.502459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.852 [2024-07-25 09:23:31.502545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.852 [2024-07-25 09:23:31.502563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.852 [2024-07-25 09:23:31.502646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5859553999884210513 len:20786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.852 [2024-07-25 09:23:31.502665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.852 #43 NEW cov: 12315 ft: 15330 corp: 16/791b lim: 100 exec/s: 43 rss: 73Mb L: 64/100 MS: 1 ChangeByte- 00:08:18.852 [2024-07-25 09:23:31.571932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.852 [2024-07-25 09:23:31.571960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.852 #44 NEW cov: 12315 ft: 15358 corp: 17/812b lim: 100 exec/s: 44 rss: 73Mb L: 21/100 MS: 1 CMP- DE: 
"\000\027\254c\337e\263\246"- 00:08:18.852 [2024-07-25 09:23:31.622528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.852 [2024-07-25 09:23:31.622559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.852 [2024-07-25 09:23:31.622640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.852 [2024-07-25 09:23:31.622656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.852 #45 NEW cov: 12315 ft: 15366 corp: 18/852b lim: 100 exec/s: 45 rss: 73Mb L: 40/100 MS: 1 CopyPart- 00:08:19.110 [2024-07-25 09:23:31.682950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568637439 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.682980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.683062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.683087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.683168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.683186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.110 #46 NEW cov: 12315 ft: 15376 corp: 19/912b lim: 100 exec/s: 46 rss: 73Mb L: 60/100 MS: 1 ChangeBinInt- 00:08:19.110 [2024-07-25 09:23:31.754179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568637439 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.754207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.754302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.754318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.754407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.754424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.754515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.754532] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.754623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.754639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:19.110 #47 NEW cov: 12315 ft: 15383 corp: 20/1012b lim: 100 exec/s: 47 rss: 73Mb L: 100/100 MS: 1 CrossOver- 00:08:19.110 [2024-07-25 09:23:31.824448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.824479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.824558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10281347063045247407 len:45650 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.824576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.824656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.824678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.824765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.824785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.824880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:22888881447763968 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.824903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:19.110 #48 NEW cov: 12315 ft: 15444 corp: 21/1112b lim: 100 exec/s: 48 rss: 73Mb L: 100/100 MS: 1 ShuffleBytes- 00:08:19.110 [2024-07-25 09:23:31.893938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16097469968653200483 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.893963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.894053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.894067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.110 [2024-07-25 09:23:31.894158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.110 [2024-07-25 09:23:31.894176] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.110 #49 NEW cov: 12315 ft: 15445 corp: 22/1172b lim: 100 exec/s: 49 rss: 73Mb L: 60/100 MS: 1 PersAutoDict- DE: "\000\027\254c\337e\263\246"- 00:08:19.368 [2024-07-25 09:23:31.943359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.368 [2024-07-25 09:23:31.943385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.368 #50 NEW cov: 12315 ft: 15507 corp: 23/1192b lim: 100 exec/s: 50 rss: 73Mb L: 20/100 MS: 1 ShuffleBytes- 00:08:19.368 [2024-07-25 09:23:31.993903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16097469968653200483 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.368 [2024-07-25 09:23:31.993928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.368 [2024-07-25 09:23:31.993996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.368 [2024-07-25 09:23:31.994017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.368 #51 NEW cov: 12315 ft: 15540 corp: 24/1234b lim: 100 exec/s: 51 rss: 73Mb L: 42/100 MS: 1 EraseBytes- 00:08:19.368 [2024-07-25 09:23:32.054120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568700927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.368 [2024-07-25 09:23:32.054148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.368 [2024-07-25 09:23:32.054222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18416063301248090111 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.368 [2024-07-25 09:23:32.054239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.368 #52 NEW cov: 12315 ft: 15544 corp: 25/1274b lim: 100 exec/s: 52 rss: 73Mb L: 40/100 MS: 1 ChangeByte- 00:08:19.368 [2024-07-25 09:23:32.104734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.368 [2024-07-25 09:23:32.104758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.368 [2024-07-25 09:23:32.104831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10281347063045247407 len:45650 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.368 [2024-07-25 09:23:32.104844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.368 #53 NEW cov: 12315 ft: 15562 corp: 26/1328b lim: 100 exec/s: 53 rss: 73Mb L: 54/100 MS: 1 ChangeBit- 00:08:19.368 [2024-07-25 09:23:32.154783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:19.368 [2024-07-25 09:23:32.154808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.627 #54 NEW cov: 12315 ft: 15571 corp: 27/1350b lim: 100 exec/s: 54 rss: 74Mb L: 22/100 MS: 1 InsertByte- 00:08:19.627 [2024-07-25 09:23:32.215405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568700927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.215430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.627 [2024-07-25 09:23:32.215495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18416063301248090111 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.215508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.627 #55 NEW cov: 12315 ft: 15579 corp: 28/1390b lim: 100 exec/s: 55 rss: 74Mb L: 40/100 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:08:19.627 [2024-07-25 09:23:32.275545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.275569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.627 [2024-07-25 09:23:32.275630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10281347063045247407 len:45650 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.275646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.627 #56 NEW cov: 12315 ft: 15608 corp: 29/1444b lim: 100 exec/s: 56 rss: 74Mb L: 54/100 MS: 1 ChangeByte- 00:08:19.627 [2024-07-25 09:23:32.325685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859554002803446097 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.325710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.627 #57 NEW cov: 12315 ft: 15642 corp: 30/1482b lim: 100 exec/s: 57 rss: 74Mb L: 38/100 MS: 1 EraseBytes- 00:08:19.627 [2024-07-25 09:23:32.377104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568700927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.377127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.627 [2024-07-25 09:23:32.377220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.377252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.627 [2024-07-25 09:23:32.377339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.377357] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.627 [2024-07-25 09:23:32.377452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.377466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.627 #58 NEW cov: 12315 ft: 15653 corp: 31/1562b lim: 100 exec/s: 58 rss: 74Mb L: 80/100 MS: 1 ShuffleBytes- 00:08:19.627 [2024-07-25 09:23:32.427979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568637439 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.428006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.627 [2024-07-25 09:23:32.428115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.428133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.627 [2024-07-25 09:23:32.428220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.428238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.627 [2024-07-25 09:23:32.428333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.627 [2024-07-25 09:23:32.428349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.628 [2024-07-25 09:23:32.428438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.628 [2024-07-25 09:23:32.428454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:19.887 #59 NEW cov: 12315 ft: 15678 corp: 32/1662b lim: 100 exec/s: 59 rss: 74Mb L: 100/100 MS: 1 CopyPart- 00:08:19.887 [2024-07-25 09:23:32.487888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.887 [2024-07-25 09:23:32.487914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.887 [2024-07-25 09:23:32.488009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.887 [2024-07-25 09:23:32.488026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.887 [2024-07-25 09:23:32.488114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:19.887 [2024-07-25 09:23:32.488129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.887 [2024-07-25 09:23:32.488223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.887 [2024-07-25 09:23:32.488239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.887 #61 NEW cov: 12315 ft: 15688 corp: 33/1754b lim: 100 exec/s: 61 rss: 74Mb L: 92/100 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:19.887 [2024-07-25 09:23:32.538444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072568700927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.887 [2024-07-25 09:23:32.538470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.887 [2024-07-25 09:23:32.538551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.887 [2024-07-25 09:23:32.538568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.887 [2024-07-25 09:23:32.538654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.887 [2024-07-25 09:23:32.538674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.887 [2024-07-25 09:23:32.538774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.887 [2024-07-25 09:23:32.538793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.887 #62 NEW cov: 12315 ft: 15709 corp: 34/1834b lim: 100 exec/s: 31 rss: 74Mb L: 80/100 MS: 1 CopyPart- 00:08:19.887 #62 DONE cov: 12315 ft: 15709 corp: 34/1834b lim: 100 exec/s: 31 rss: 74Mb 00:08:19.887 ###### Recommended dictionary. ###### 00:08:19.887 "\017\000\000\000" # Uses: 1 00:08:19.887 "\000\027\254c\337e\263\246" # Uses: 1 00:08:19.887 ###### End of recommended dictionary. 
###### 00:08:19.887 Done 62 runs in 2 second(s) 00:08:19.887 09:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:19.887 09:23:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:19.887 09:23:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.887 09:23:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:19.887 00:08:19.887 real 1m2.945s 00:08:19.887 user 1m44.817s 00:08:19.887 sys 0m5.934s 00:08:19.887 09:23:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.887 09:23:32 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:19.887 ************************************ 00:08:19.887 END TEST nvmf_llvm_fuzz 00:08:19.887 ************************************ 00:08:20.148 09:23:32 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:20.148 09:23:32 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:20.148 09:23:32 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:20.148 09:23:32 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.148 09:23:32 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.148 09:23:32 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 ************************************ 00:08:20.148 START TEST vfio_llvm_fuzz 00:08:20.148 ************************************ 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:20.148 * Looking for test storage... 
00:08:20.148 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:20.148 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:20.149 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:20.149 #define SPDK_CONFIG_H 00:08:20.149 #define SPDK_CONFIG_APPS 1 00:08:20.149 #define SPDK_CONFIG_ARCH native 00:08:20.149 #undef SPDK_CONFIG_ASAN 00:08:20.149 #undef SPDK_CONFIG_AVAHI 00:08:20.149 #undef SPDK_CONFIG_CET 00:08:20.149 #define SPDK_CONFIG_COVERAGE 1 00:08:20.149 #define SPDK_CONFIG_CROSS_PREFIX 00:08:20.149 #undef SPDK_CONFIG_CRYPTO 00:08:20.149 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:20.149 #undef SPDK_CONFIG_CUSTOMOCF 00:08:20.149 #undef SPDK_CONFIG_DAOS 00:08:20.149 #define SPDK_CONFIG_DAOS_DIR 00:08:20.149 #define SPDK_CONFIG_DEBUG 1 00:08:20.149 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:20.149 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:20.149 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:20.149 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:20.149 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:20.149 #undef SPDK_CONFIG_DPDK_UADK 00:08:20.150 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:20.150 #define SPDK_CONFIG_EXAMPLES 1 00:08:20.150 #undef SPDK_CONFIG_FC 00:08:20.150 #define SPDK_CONFIG_FC_PATH 00:08:20.150 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:20.150 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:20.150 #undef SPDK_CONFIG_FUSE 00:08:20.150 #define SPDK_CONFIG_FUZZER 1 00:08:20.150 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:20.150 #undef SPDK_CONFIG_GOLANG 00:08:20.150 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:20.150 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:20.150 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:20.150 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:20.150 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:20.150 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:20.150 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:20.150 #define SPDK_CONFIG_IDXD 1 00:08:20.150 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:20.150 #undef SPDK_CONFIG_IPSEC_MB 00:08:20.150 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:20.150 #define SPDK_CONFIG_ISAL 1 00:08:20.150 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:20.150 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:20.150 #define SPDK_CONFIG_LIBDIR 00:08:20.150 #undef SPDK_CONFIG_LTO 00:08:20.150 #define SPDK_CONFIG_MAX_LCORES 128 00:08:20.150 #define SPDK_CONFIG_NVME_CUSE 1 00:08:20.150 #undef SPDK_CONFIG_OCF 00:08:20.150 #define SPDK_CONFIG_OCF_PATH 00:08:20.150 #define SPDK_CONFIG_OPENSSL_PATH 00:08:20.150 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:20.150 #define SPDK_CONFIG_PGO_DIR 00:08:20.150 #undef SPDK_CONFIG_PGO_USE 00:08:20.150 #define SPDK_CONFIG_PREFIX /usr/local 00:08:20.150 #undef SPDK_CONFIG_RAID5F 00:08:20.150 #undef SPDK_CONFIG_RBD 00:08:20.150 #define SPDK_CONFIG_RDMA 1 00:08:20.150 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:20.150 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:20.150 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:20.150 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:20.150 #undef SPDK_CONFIG_SHARED 00:08:20.150 #undef SPDK_CONFIG_SMA 00:08:20.150 #define SPDK_CONFIG_TESTS 1 00:08:20.150 #undef SPDK_CONFIG_TSAN 00:08:20.150 #define SPDK_CONFIG_UBLK 1 00:08:20.150 #define SPDK_CONFIG_UBSAN 1 00:08:20.150 #undef SPDK_CONFIG_UNIT_TESTS 00:08:20.150 #undef SPDK_CONFIG_URING 00:08:20.150 #define SPDK_CONFIG_URING_PATH 00:08:20.150 #undef SPDK_CONFIG_URING_ZNS 00:08:20.150 #undef SPDK_CONFIG_USDT 00:08:20.150 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:20.150 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:20.150 #define SPDK_CONFIG_VFIO_USER 1 00:08:20.150 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:20.150 #define SPDK_CONFIG_VHOST 1 00:08:20.150 #define SPDK_CONFIG_VIRTIO 1 00:08:20.150 #undef SPDK_CONFIG_VTUNE 00:08:20.150 #define SPDK_CONFIG_VTUNE_DIR 00:08:20.150 #define SPDK_CONFIG_WERROR 1 00:08:20.150 #define SPDK_CONFIG_WPDK_DIR 00:08:20.150 #undef SPDK_CONFIG_XNVME 00:08:20.150 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:20.150 
09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:20.150 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:20.151 09:23:32 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:20.151 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # cat 00:08:20.152 09:23:32 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export valgrind= 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # valgrind= 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # uname -s 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:08:20.152 09:23:32 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@281 -- # MAKE=make 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j88 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@301 -- # TEST_MODE= 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@320 -- # [[ -z 425808 ]] 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@320 -- # kill -0 425808 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local mount target_dir 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.ltb4NF 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.ltb4NF/tests/vfio /tmp/spdk.ltb4NF 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # df -T 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:08:20.152 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # 
uses["$mount"]=0 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=948682752 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4335747072 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=54561505280 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742047232 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=7180541952 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30867648512 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=12342374400 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348411904 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=6037504 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30870654976 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=368640 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:20.153 09:23:32 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=6174199808 00:08:20.153 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174203904 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:08:20.413 * Looking for test storage... 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@370 -- # local target_space new_size 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mount=/ 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # target_space=54561505280 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # new_size=9395134464 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:20.413 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # return 0 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:20.413 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:20.414 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:20.414 09:23:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:20.414 [2024-07-25 09:23:33.017542] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:20.414 [2024-07-25 09:23:33.017612] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425912 ] 00:08:20.414 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.414 [2024-07-25 09:23:33.081390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.414 [2024-07-25 09:23:33.155464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.672 INFO: Running with entropic power schedule (0xFF, 100). 00:08:20.672 INFO: Seed: 1443496079 00:08:20.672 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:08:20.672 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:08:20.672 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:20.672 INFO: A corpus is not provided, starting from an empty corpus 00:08:20.672 #2 INITED exec/s: 0 rss: 66Mb 00:08:20.672 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:20.672 This may also happen if the target rejected all inputs we tried so far 00:08:20.672 [2024-07-25 09:23:33.401697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:20.930 NEW_FUNC[1/659]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:20.930 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:20.930 #19 NEW cov: 10979 ft: 10754 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:21.188 #25 NEW cov: 10993 ft: 13896 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:21.188 #31 NEW cov: 10993 ft: 15756 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:21.447 #32 NEW cov: 10993 ft: 16525 corp: 5/25b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:21.447 #33 NEW cov: 10993 ft: 16776 corp: 6/31b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:08:21.447 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:21.447 #34 NEW cov: 11010 ft: 17125 corp: 7/37b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:08:21.705 #40 NEW cov: 11010 ft: 17412 corp: 8/43b lim: 6 exec/s: 40 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:08:21.705 #41 NEW cov: 11010 ft: 17438 corp: 9/49b lim: 6 exec/s: 41 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:21.963 #42 NEW cov: 11010 ft: 17884 corp: 10/55b lim: 6 exec/s: 42 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:21.963 #43 NEW cov: 11010 ft: 18863 corp: 11/61b lim: 6 exec/s: 43 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:08:22.221 #46 NEW cov: 11010 ft: 19135 corp: 12/67b lim: 6 exec/s: 46 rss: 74Mb L: 6/6 MS: 3 InsertByte-InsertByte-CopyPart- 00:08:22.221 #48 NEW cov: 11010 ft: 19177 corp: 13/73b lim: 6 exec/s: 48 rss: 74Mb L: 6/6 MS: 2 EraseBytes-CrossOver- 00:08:22.479 #49 NEW cov: 11010 ft: 19277 corp: 14/79b lim: 6 exec/s: 49 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:08:22.479 #50 NEW cov: 11017 ft: 19298 corp: 15/85b lim: 6 exec/s: 50 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:08:22.738 #51 NEW cov: 11017 ft: 19439 corp: 16/91b lim: 6 exec/s: 25 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:22.738 #51 DONE cov: 11017 ft: 19439 corp: 16/91b lim: 6 exec/s: 25 rss: 74Mb 00:08:22.738 Done 51 runs in 2 second(s) 00:08:22.738 [2024-07-25 09:23:35.381243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- 
vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:22.998 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:22.998 09:23:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:22.998 [2024-07-25 09:23:35.665592] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:22.998 [2024-07-25 09:23:35.665675] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426305 ] 00:08:22.998 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.998 [2024-07-25 09:23:35.730163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.998 [2024-07-25 09:23:35.803008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.256 INFO: Running with entropic power schedule (0xFF, 100). 00:08:23.256 INFO: Seed: 4083528813 00:08:23.256 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:08:23.256 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:08:23.256 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:23.256 INFO: A corpus is not provided, starting from an empty corpus 00:08:23.256 #2 INITED exec/s: 0 rss: 66Mb 00:08:23.256 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:23.256 This may also happen if the target rejected all inputs we tried so far 00:08:23.256 [2024-07-25 09:23:36.041138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:23.256 [2024-07-25 09:23:36.063116] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.256 [2024-07-25 09:23:36.063143] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.256 [2024-07-25 09:23:36.063157] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.515 NEW_FUNC[1/660]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:23.515 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:23.515 #25 NEW cov: 10966 ft: 10835 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 3 InsertByte-InsertByte-CopyPart- 00:08:23.515 [2024-07-25 09:23:36.303369] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.515 [2024-07-25 09:23:36.303403] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.515 [2024-07-25 09:23:36.303419] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.773 #26 NEW cov: 10985 ft: 13931 corp: 3/9b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:23.773 [2024-07-25 09:23:36.430440] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.773 [2024-07-25 09:23:36.430466] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.773 [2024-07-25 09:23:36.430481] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.773 #32 NEW cov: 10985 ft: 14825 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:23.773 [2024-07-25 09:23:36.546530] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.773 [2024-07-25 09:23:36.546554] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.773 [2024-07-25 09:23:36.546569] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.031 #33 NEW cov: 10985 ft: 15452 corp: 5/17b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:24.031 [2024-07-25 09:23:36.662476] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.031 [2024-07-25 09:23:36.662499] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.031 [2024-07-25 09:23:36.662514] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.031 #39 NEW cov: 10985 ft: 16218 corp: 6/21b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:08:24.031 [2024-07-25 09:23:36.788501] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.031 [2024-07-25 09:23:36.788524] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.031 [2024-07-25 09:23:36.788538] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.289 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:24.289 #45 NEW cov: 11002 ft: 16410 corp: 7/25b 
lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:08:24.289 [2024-07-25 09:23:36.904456] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.289 [2024-07-25 09:23:36.904480] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.289 [2024-07-25 09:23:36.904495] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.289 #46 NEW cov: 11002 ft: 16753 corp: 8/29b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:24.289 [2024-07-25 09:23:37.031710] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.289 [2024-07-25 09:23:37.031734] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.289 [2024-07-25 09:23:37.031749] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.548 #47 NEW cov: 11002 ft: 16903 corp: 9/33b lim: 4 exec/s: 47 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:24.548 [2024-07-25 09:23:37.148658] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.548 [2024-07-25 09:23:37.148682] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.548 [2024-07-25 09:23:37.148697] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.548 #48 NEW cov: 11002 ft: 17009 corp: 10/37b lim: 4 exec/s: 48 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:08:24.548 [2024-07-25 09:23:37.265892] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.548 [2024-07-25 09:23:37.265916] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.548 [2024-07-25 09:23:37.265931] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.548 #49 NEW cov: 11002 ft: 17359 corp: 11/41b lim: 4 exec/s: 49 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:24.806 [2024-07-25 09:23:37.381789] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.806 [2024-07-25 09:23:37.381812] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.806 [2024-07-25 09:23:37.381826] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.806 #50 NEW cov: 11002 ft: 17746 corp: 12/45b lim: 4 exec/s: 50 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:08:24.806 [2024-07-25 09:23:37.507866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.806 [2024-07-25 09:23:37.507886] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.806 [2024-07-25 09:23:37.507900] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.806 #51 NEW cov: 11002 ft: 17767 corp: 13/49b lim: 4 exec/s: 51 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:25.064 [2024-07-25 09:23:37.623803] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.064 [2024-07-25 09:23:37.623826] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.064 [2024-07-25 09:23:37.623840] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.064 #52 NEW cov: 11002 ft: 17859 corp: 14/53b lim: 4 exec/s: 52 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:08:25.064 [2024-07-25 09:23:37.739753] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: bad command 1 00:08:25.064 [2024-07-25 09:23:37.739779] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.064 [2024-07-25 09:23:37.739794] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.064 #53 NEW cov: 11009 ft: 17990 corp: 15/57b lim: 4 exec/s: 53 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:25.064 [2024-07-25 09:23:37.855574] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.064 [2024-07-25 09:23:37.855595] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.064 [2024-07-25 09:23:37.855609] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.323 #64 NEW cov: 11009 ft: 18053 corp: 16/61b lim: 4 exec/s: 64 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:08:25.323 [2024-07-25 09:23:37.971613] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.323 [2024-07-25 09:23:37.971636] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.323 [2024-07-25 09:23:37.971650] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.323 #65 NEW cov: 11009 ft: 18070 corp: 17/65b lim: 4 exec/s: 32 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:08:25.323 #65 DONE cov: 11009 ft: 18070 corp: 17/65b lim: 4 exec/s: 32 rss: 74Mb 00:08:25.323 Done 65 runs in 2 second(s) 00:08:25.323 [2024-07-25 09:23:38.063255] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 
00:08:25.581 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:25.581 09:23:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:25.581 [2024-07-25 09:23:38.342833] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:25.581 [2024-07-25 09:23:38.342922] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426736 ] 00:08:25.581 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.839 [2024-07-25 09:23:38.406469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.839 [2024-07-25 09:23:38.479122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.098 INFO: Running with entropic power schedule (0xFF, 100). 00:08:26.098 INFO: Seed: 2468523089 00:08:26.098 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:08:26.098 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:08:26.098 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:26.098 INFO: A corpus is not provided, starting from an empty corpus 00:08:26.098 #2 INITED exec/s: 0 rss: 66Mb 00:08:26.098 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:26.098 This may also happen if the target rejected all inputs we tried so far 00:08:26.098 [2024-07-25 09:23:38.719065] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:26.098 [2024-07-25 09:23:38.776127] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.356 NEW_FUNC[1/659]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:26.356 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:26.356 #22 NEW cov: 10957 ft: 10917 corp: 2/9b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 5 ChangeBit-CopyPart-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:26.356 [2024-07-25 09:23:39.080549] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.614 NEW_FUNC[1/1]: 0x1ec2f30 in spdk_bit_array_get /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/util/bit_array.c:152 00:08:26.614 #23 NEW cov: 10976 ft: 15014 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:26.614 [2024-07-25 09:23:39.282031] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.614 #27 NEW cov: 10976 ft: 15908 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 4 EraseBytes-CrossOver-ShuffleBytes-CrossOver- 00:08:26.872 [2024-07-25 09:23:39.474519] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:26.872 [2024-07-25 09:23:39.474550] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:26.872 NEW_FUNC[1/2]: 0x13ebc70 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3094 00:08:26.872 NEW_FUNC[2/2]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:26.872 #28 NEW cov: 11003 ft: 16450 corp: 5/33b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 CMP- DE: "\177\000\000\000\000\000\000\000"- 00:08:26.872 [2024-07-25 09:23:39.679898] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.131 #29 NEW cov: 11003 ft: 16585 corp: 6/41b lim: 8 exec/s: 29 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:27.131 [2024-07-25 09:23:39.868141] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:08:27.131 [2024-07-25 09:23:39.868169] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:08:27.388 #30 NEW cov: 11003 ft: 17019 corp: 7/49b lim: 8 exec/s: 30 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:08:27.388 [2024-07-25 09:23:40.058739] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.388 #31 NEW cov: 11003 ft: 17353 corp: 8/57b lim: 8 exec/s: 31 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:08:27.645 [2024-07-25 09:23:40.247482] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.645 #32 NEW cov: 11003 ft: 17400 corp: 9/65b lim: 8 exec/s: 32 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:27.645 [2024-07-25 09:23:40.435528] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.902 #33 NEW cov: 11010 ft: 17493 corp: 10/73b lim: 8 exec/s: 33 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:08:27.902 [2024-07-25 09:23:40.626458] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized 
argument length, command 5 00:08:28.160 #34 NEW cov: 11010 ft: 17513 corp: 11/81b lim: 8 exec/s: 17 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:28.160 #34 DONE cov: 11010 ft: 17513 corp: 11/81b lim: 8 exec/s: 17 rss: 74Mb 00:08:28.160 ###### Recommended dictionary. ###### 00:08:28.160 "\177\000\000\000\000\000\000\000" # Uses: 0 00:08:28.160 ###### End of recommended dictionary. ###### 00:08:28.160 Done 34 runs in 2 second(s) 00:08:28.160 [2024-07-25 09:23:40.754255] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:28.419 09:23:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:28.419 09:23:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:28.419 09:23:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:28.419 09:23:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:28.419 09:23:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:28.419 09:23:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:28.419 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:28.419 09:23:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:28.419 09:23:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:28.419 09:23:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:28.419 [2024-07-25 09:23:41.035026] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:28.419 [2024-07-25 09:23:41.035110] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427172 ] 00:08:28.419 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.419 [2024-07-25 09:23:41.099942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.419 [2024-07-25 09:23:41.174431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.677 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.678 INFO: Seed: 872561955 00:08:28.678 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:08:28.678 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:08:28.678 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:28.678 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.678 #2 INITED exec/s: 0 rss: 66Mb 00:08:28.678 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:28.678 This may also happen if the target rejected all inputs we tried so far 00:08:28.678 [2024-07-25 09:23:41.417904] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:28.937 NEW_FUNC[1/660]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:28.937 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:28.937 #92 NEW cov: 10967 ft: 10919 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 CopyPart-InsertRepeatedBytes-CrossOver-InsertRepeatedBytes-CopyPart- 00:08:29.195 #123 NEW cov: 10984 ft: 14687 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:29.453 #134 NEW cov: 10984 ft: 15994 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:29.453 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:29.453 #140 NEW cov: 11001 ft: 16315 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:29.711 #141 NEW cov: 11001 ft: 16554 corp: 6/161b lim: 32 exec/s: 141 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:08:29.969 #142 NEW cov: 11001 ft: 16956 corp: 7/193b lim: 32 exec/s: 142 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:30.228 #143 NEW cov: 11001 ft: 17231 corp: 8/225b lim: 32 exec/s: 143 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:30.228 #144 NEW cov: 11001 ft: 17637 corp: 9/257b lim: 32 exec/s: 144 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:30.487 #145 NEW cov: 11001 ft: 18118 corp: 10/289b lim: 32 exec/s: 145 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:30.746 #146 NEW cov: 11008 ft: 18371 corp: 11/321b lim: 32 exec/s: 146 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:30.746 #157 NEW cov: 11008 ft: 18644 corp: 12/353b lim: 32 exec/s: 78 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:08:30.746 #157 DONE cov: 11008 ft: 18644 corp: 12/353b lim: 32 exec/s: 78 rss: 74Mb 00:08:30.746 Done 157 runs in 2 second(s) 00:08:31.004 [2024-07-25 09:23:43.557255] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:31.004 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:31.004 
09:23:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:31.004 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:31.004 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:31.004 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:31.004 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:31.004 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:31.004 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:31.005 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:31.005 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:31.005 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:31.005 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:31.005 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:31.263 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:31.263 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:31.263 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:31.263 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:31.263 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:31.263 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:31.263 09:23:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:31.263 [2024-07-25 09:23:43.846571] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:31.263 [2024-07-25 09:23:43.846636] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427682 ] 00:08:31.263 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.263 [2024-07-25 09:23:43.909666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.263 [2024-07-25 09:23:43.983430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.522 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:31.522 INFO: Seed: 3671561431
00:08:31.522 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55),
00:08:31.522 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8),
00:08:31.522 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:08:31.522 INFO: A corpus is not provided, starting from an empty corpus
00:08:31.522 #2 INITED exec/s: 0 rss: 65Mb
00:08:31.522 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:31.522 This may also happen if the target rejected all inputs we tried so far
00:08:31.522 [2024-07-25 09:23:44.219754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller
00:08:31.781 NEW_FUNC[1/660]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144
00:08:31.781 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:31.781 #126 NEW cov: 10965 ft: 10936 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 4 InsertRepeatedBytes-ShuffleBytes-ChangeBit-CopyPart-
00:08:32.039 #137 NEW cov: 10982 ft: 14550 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeByte-
00:08:32.299 #138 NEW cov: 10982 ft: 15054 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt-
00:08:32.299 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:08:32.299 #139 NEW cov: 10999 ft: 16276 corp: 5/129b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBit-
00:08:32.557 #140 NEW cov: 10999 ft: 16633 corp: 6/161b lim: 32 exec/s: 140 rss: 73Mb L: 32/32 MS: 1 CrossOver-
00:08:32.816 #146 NEW cov: 10999 ft: 17312 corp: 7/193b lim: 32 exec/s: 146 rss: 73Mb L: 32/32 MS: 1 CrossOver-
00:08:33.074 #147 NEW cov: 10999 ft: 17723 corp: 8/225b lim: 32 exec/s: 147 rss: 73Mb L: 32/32 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"-
00:08:33.074 #158 NEW cov: 10999 ft: 17940 corp: 9/257b lim: 32 exec/s: 158 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt-
00:08:33.333 #159 NEW cov: 11006 ft: 18229 corp: 10/289b lim: 32 exec/s: 159 rss: 73Mb L: 32/32 MS: 1 ChangeBit-
00:08:33.591 #165 NEW cov: 11006 ft: 18371 corp: 11/321b lim: 32 exec/s: 82 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt-
00:08:33.591 #165 DONE cov: 11006 ft: 18371 corp: 11/321b lim: 32 exec/s: 82 rss: 73Mb
00:08:33.591 ###### Recommended dictionary. ######
00:08:33.591 "\000\000\000\000\000\000\000\000" # Uses: 0
00:08:33.591 ###### End of recommended dictionary. ######
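
libFuzzer prints this "Recommended dictionary" block when its comparison instrumentation finds byte sequences worth reusing, here the all-zero 8-byte operand from the CMP entry at #147 above. This job discards the recommendation, but as a purely illustrative follow-up the entry could be saved and fed back on a later run through libFuzzer's standard -dict= option; the file path below is invented for the sketch and the entry is the one printed above, rewritten with hex escapes:

  # Not run by this job; hypothetical sketch only.
  echo '"\x00\x00\x00\x00\x00\x00\x00\x00"' > /tmp/llvm_vfio_4.dict
  # then append -dict=/tmp/llvm_vfio_4.dict to the llvm_vfio_fuzz command line
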
00:08:33.591 Done 165 runs in 2 second(s)
00:08:33.591 [2024-07-25 09:23:46.224253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%;
00:08:33.851 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:08:33.851 09:23:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5
00:08:33.851 [2024-07-25 09:23:46.507086] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:33.851 [2024-07-25 09:23:46.507165] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428091 ]
00:08:33.851 EAL: No free 2048 kB hugepages reported on node 1
00:08:33.851 [2024-07-25 09:23:46.570870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:33.851 [2024-07-25 09:23:46.642753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:34.110 INFO: Running with entropic power schedule (0xFF, 100).
00:08:34.110 INFO: Seed: 2032597577
00:08:34.110 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55),
00:08:34.110 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8),
00:08:34.110 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:08:34.110 INFO: A corpus is not provided, starting from an empty corpus
00:08:34.110 #2 INITED exec/s: 0 rss: 66Mb
00:08:34.110 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:34.110 This may also happen if the target rejected all inputs we tried so far
00:08:34.111 [2024-07-25 09:23:46.874920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:08:34.111 [2024-07-25 09:23:46.896121] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:34.111 [2024-07-25 09:23:46.896154] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:34.369 NEW_FUNC[1/660]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:08:34.369 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:34.369 #29 NEW cov: 10974 ft: 10840 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes-
00:08:34.369 [2024-07-25 09:23:47.135356] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:34.369 [2024-07-25 09:23:47.135397] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:34.629 NEW_FUNC[1/1]: 0x1717ca0 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1157
00:08:34.629 #45 NEW cov: 10991 ft: 13854 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeBit-
00:08:34.629 [2024-07-25 09:23:47.261651] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:34.629 [2024-07-25 09:23:47.261683] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:34.629 #46 NEW cov: 10991 ft: 14329 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBit-
00:08:34.629 [2024-07-25 09:23:47.377780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:34.629 [2024-07-25 09:23:47.377808] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:34.889 #47 NEW cov: 10991 ft: 14618 corp: 5/53b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBit-
00:08:34.889 [2024-07-25 09:23:47.494840] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:34.889 [2024-07-25 09:23:47.494870] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:34.889 #53 NEW cov: 10991 ft: 15098 corp: 6/66b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CrossOver-
00:08:34.889 [2024-07-25 09:23:47.610843] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:34.889 [2024-07-25 09:23:47.610872] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:34.889 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:08:34.889 #54 NEW cov: 11008 ft: 15402 corp: 7/79b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt-
00:08:35.148 [2024-07-25 09:23:47.737921] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.148 [2024-07-25 09:23:47.737953] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:35.148 #60 NEW cov: 11008 ft: 15474 corp: 8/92b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeByte-
00:08:35.148 [2024-07-25 09:23:47.854151] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.148 [2024-07-25 09:23:47.854182] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:35.148 #70 NEW cov: 11008 ft: 15686 corp: 9/105b lim: 13 exec/s: 70 rss: 74Mb L: 13/13 MS: 5 EraseBytes-InsertByte-CrossOver-ChangeBinInt-CrossOver-
00:08:35.406 [2024-07-25 09:23:47.982379] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.406 [2024-07-25 09:23:47.982410] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:35.406 #71 NEW cov: 11008 ft: 15985 corp: 10/118b lim: 13 exec/s: 71 rss: 74Mb L: 13/13 MS: 1 CopyPart-
00:08:35.406 [2024-07-25 09:23:48.098557] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.406 [2024-07-25 09:23:48.098587] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:35.406 #72 NEW cov: 11008 ft: 16173 corp: 11/131b lim: 13 exec/s: 72 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes-
00:08:35.406 [2024-07-25 09:23:48.214725] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.406 [2024-07-25 09:23:48.214756] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:35.665 #73 NEW cov: 11008 ft: 16231 corp: 12/144b lim: 13 exec/s: 73 rss: 74Mb L: 13/13 MS: 1 ChangeBit-
00:08:35.665 [2024-07-25 09:23:48.340646] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.665 [2024-07-25 09:23:48.340675] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:35.665 #79 NEW cov: 11008 ft: 17290 corp: 13/157b lim: 13 exec/s: 79 rss: 74Mb L: 13/13 MS: 1 CrossOver-
00:08:35.665 [2024-07-25 09:23:48.466680] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.665 [2024-07-25 09:23:48.466707] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:35.924 #85 NEW cov: 11008 ft: 17378 corp: 14/170b lim: 13 exec/s: 85 rss: 74Mb L: 13/13 MS: 1 ChangeBit-
00:08:35.924 [2024-07-25 09:23:48.582666] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.924 [2024-07-25 09:23:48.582695] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:35.924 #86 NEW cov: 11015 ft: 17400 corp: 15/183b lim: 13 exec/s: 86 rss: 74Mb L: 13/13 MS: 1 ChangeBit-
00:08:35.924 [2024-07-25 09:23:48.698654] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:35.924 [2024-07-25 09:23:48.698683] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:36.182 #87 NEW cov: 11015 ft: 17475 corp: 16/196b lim: 13 exec/s: 87 rss: 74Mb L: 13/13 MS: 1 CrossOver-
00:08:36.182 [2024-07-25 09:23:48.815803] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:36.182 [2024-07-25 09:23:48.815831] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:36.182 #88 NEW cov: 11015 ft: 17493 corp: 17/209b lim: 13 exec/s: 44 rss: 74Mb L: 13/13 MS: 1 ChangeByte-
00:08:36.182 #88 DONE cov: 11015 ft: 17493 corp: 17/209b lim: 13 exec/s: 44 rss: 74Mb
00:08:36.182 Done 88 runs in 2 second(s)
00:08:36.182 [2024-07-25 09:23:48.909254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:08:36.441 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
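
The vfio/run.sh trace above prepares the LeakSanitizer side of the run: run.sh@34 points LSAN at the suppression file named in suppress_file, and run.sh@43-44 write two known-leak entries into it. A sketch of what those traced lines amount to; the >> redirections are inferred, since bash xtrace does not print redirections, and export stands in for the script's local so the setting reaches the child process in a standalone sketch:

  suppress_file=/var/tmp/suppress_vfio_fuzz
  export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
  echo "leak:spdk_nvmf_qpair_disconnect" >> "$suppress_file"   # run.sh@43
  echo "leak:nvmf_ctrlr_create"          >> "$suppress_file"   # run.sh@44
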
00:08:36.441 09:23:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:08:36.441 [2024-07-25 09:23:49.191800] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:36.441 [2024-07-25 09:23:49.191883] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428542 ]
00:08:36.441 EAL: No free 2048 kB hugepages reported on node 1
00:08:36.699 [2024-07-25 09:23:49.256882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:36.699 [2024-07-25 09:23:49.335349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.958 INFO: Running with entropic power schedule (0xFF, 100).
00:08:36.958 INFO: Seed: 435627448
00:08:36.958 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55),
00:08:36.958 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8),
00:08:36.958 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:36.958 INFO: A corpus is not provided, starting from an empty corpus
00:08:36.958 #2 INITED exec/s: 0 rss: 65Mb
00:08:36.958 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:36.958 This may also happen if the target rejected all inputs we tried so far
00:08:36.958 [2024-07-25 09:23:49.572390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:08:36.958 [2024-07-25 09:23:49.633090] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:36.958 [2024-07-25 09:23:49.633120] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:37.216 NEW_FUNC[1/661]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:08:37.216 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:37.216 #11 NEW cov: 10969 ft: 10931 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 4 ChangeByte-CrossOver-InsertRepeatedBytes-InsertByte-
00:08:37.216 [2024-07-25 09:23:49.928341] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:37.216 [2024-07-25 09:23:49.928382] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:37.473 #12 NEW cov: 10983 ft: 14147 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CopyPart-
00:08:37.473 [2024-07-25 09:23:50.113816] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:37.473 [2024-07-25 09:23:50.113858] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:37.473 #16 NEW cov: 10983 ft: 15609 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 4 ShuffleBytes-InsertRepeatedBytes-EraseBytes-InsertRepeatedBytes-
00:08:37.730 [2024-07-25 09:23:50.311426] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:37.730 [2024-07-25 09:23:50.311456] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:37.730 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:08:37.730 #17 NEW cov: 11000 ft: 16457 corp: 5/37b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CrossOver-
00:08:37.730 [2024-07-25 09:23:50.489280] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:37.730 [2024-07-25 09:23:50.489308] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:37.987 #23 NEW cov: 11000 ft: 17071 corp: 6/46b lim: 9 exec/s: 23 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes-
00:08:37.987 [2024-07-25 09:23:50.670868] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:37.987 [2024-07-25 09:23:50.670896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:37.987 #24 NEW cov: 11000 ft: 17359 corp: 7/55b lim: 9 exec/s: 24 rss: 74Mb L: 9/9 MS: 1 ChangeByte-
00:08:38.245 [2024-07-25 09:23:50.853673] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:38.245 [2024-07-25 09:23:50.853699] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:38.245 #25 NEW cov: 11000 ft: 17941 corp: 8/64b lim: 9 exec/s: 25 rss: 74Mb L: 9/9 MS: 1 CrossOver-
00:08:38.245 [2024-07-25 09:23:51.039094] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:38.245 [2024-07-25 09:23:51.039120] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:38.503 #26 NEW cov: 11000 ft: 18075 corp: 9/73b lim: 9 exec/s: 26 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:08:38.503 [2024-07-25 09:23:51.213237] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:38.503 [2024-07-25 09:23:51.213264] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:38.761 #27 NEW cov: 11000 ft: 18486 corp: 10/82b lim: 9 exec/s: 27 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt-
00:08:38.761 [2024-07-25 09:23:51.386802] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:38.761 [2024-07-25 09:23:51.386828] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:38.761 #28 NEW cov: 11007 ft: 18542 corp: 11/91b lim: 9 exec/s: 28 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:08:38.761 [2024-07-25 09:23:51.564186] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:38.761 [2024-07-25 09:23:51.564212] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:39.019 #29 NEW cov: 11007 ft: 18722 corp: 12/100b lim: 9 exec/s: 14 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes-
00:08:39.019 #29 DONE cov: 11007 ft: 18722 corp: 12/100b lim: 9 exec/s: 14 rss: 74Mb
00:08:39.019 Done 29 runs in 2 second(s)
00:08:39.019 [2024-07-25 09:23:51.689259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:08:39.277 09:23:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:08:39.278 09:23:51 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:39.278 09:23:51 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:39.278 09:23:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
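
The "trap - SIGINT SIGTERM EXIT" line above clears a cleanup trap once the last per-type run has removed its scratch state (the rm -rf lines earlier in the trace). A hedged sketch of that install-then-clear idiom; only the clearing line and the rm -rf arguments are visible in this log, the handler body and its install site are reconstructions:

  fuzzer_dir=/tmp/vfio-user-6                  # values taken from the trace above
  suppress_file=/var/tmp/suppress_vfio_fuzz
  cleanup() {
      rm -rf "$fuzzer_dir" "$suppress_file"    # mirrors the vfio/run.sh@58 trace
  }
  trap cleanup SIGINT SIGTERM EXIT             # assumed install site
  # ... fuzz run happens here ...
  cleanup
  trap - SIGINT SIGTERM EXIT                   # the traced line: handlers cleared on a clean exit
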
00:08:39.278
00:08:39.278 real 0m19.182s
00:08:39.278 user 0m27.405s
00:08:39.278 sys 0m1.595s
00:08:39.278 09:23:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:39.278 09:23:51 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:39.278 ************************************
00:08:39.278 END TEST vfio_llvm_fuzz
00:08:39.278 ************************************
00:08:39.278 09:23:51 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]]
00:08:39.278
00:08:39.278 real 1m22.347s
00:08:39.278 user 2m12.321s
00:08:39.278 sys 0m7.667s
00:08:39.278 09:23:51 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:39.278 09:23:51 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:39.278 ************************************
00:08:39.278 END TEST llvm_fuzz
00:08:39.278 ************************************
00:08:39.278 09:23:52 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:08:39.278 09:23:52 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:08:39.278 09:23:52 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:08:39.278 09:23:52 -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:39.278 09:23:52 -- common/autotest_common.sh@10 -- # set +x
00:08:39.278 09:23:52 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:08:39.278 09:23:52 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:08:39.278 09:23:52 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:08:39.278 09:23:52 -- common/autotest_common.sh@10 -- # set +x
00:08:43.467 INFO: APP EXITING
00:08:43.467 INFO: killing all VMs
00:08:43.467 INFO: killing vhost app
00:08:43.467 INFO: EXIT DONE
00:08:46.752 Waiting for block devices as requested
00:08:46.752 0000:dd:00.0 (8086 0a54): vfio-pci -> nvme
00:08:46.752 0000:df:00.0 (8086 0a54): vfio-pci -> nvme
00:08:46.752 0000:de:00.0 (8086 0953): vfio-pci -> nvme
00:08:47.010 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:47.010 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:47.010 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:47.010 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:47.269 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:47.269 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:47.269 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:47.269 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:47.528 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:47.528 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:47.528 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:47.528 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:47.787 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:47.787 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:47.787 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:48.045 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:48.045 0000:dc:00.0 (8086 0953): vfio-pci -> nvme
00:08:51.332 Cleaning
00:08:51.332 Removing: /dev/shm/spdk_tgt_trace.pid403183
00:08:51.332 Removing: /var/run/dpdk/spdk_pid399115
00:08:51.332 Removing: /var/run/dpdk/spdk_pid401307
00:08:51.332 Removing: /var/run/dpdk/spdk_pid403183
00:08:51.332 Removing: /var/run/dpdk/spdk_pid403783
00:08:51.332 Removing: /var/run/dpdk/spdk_pid404660
00:08:51.332 Removing: /var/run/dpdk/spdk_pid404877
00:08:51.332 Removing: /var/run/dpdk/spdk_pid405790
00:08:51.332 Removing: /var/run/dpdk/spdk_pid405797
00:08:51.332 Removing: /var/run/dpdk/spdk_pid406151
00:08:51.332 Removing: /var/run/dpdk/spdk_pid406412
00:08:51.332 Removing: /var/run/dpdk/spdk_pid406686
00:08:51.332 Removing: /var/run/dpdk/spdk_pid406973
00:08:51.332 Removing: /var/run/dpdk/spdk_pid407275
00:08:51.332 Removing: /var/run/dpdk/spdk_pid407491
00:08:51.332 Removing: /var/run/dpdk/spdk_pid407715
00:08:51.332 Removing: /var/run/dpdk/spdk_pid407979
00:08:51.332 Removing: /var/run/dpdk/spdk_pid408871
00:08:51.332 Removing: /var/run/dpdk/spdk_pid411665
00:08:51.332 Removing: /var/run/dpdk/spdk_pid411918
00:08:51.332 Removing: /var/run/dpdk/spdk_pid412158
00:08:51.332 Removing: /var/run/dpdk/spdk_pid412345
00:08:51.332 Removing: /var/run/dpdk/spdk_pid412642
00:08:51.332 Removing: /var/run/dpdk/spdk_pid412855
00:08:51.332 Removing: /var/run/dpdk/spdk_pid413328
00:08:51.332 Removing: /var/run/dpdk/spdk_pid413334
00:08:51.332 Removing: /var/run/dpdk/spdk_pid413580
00:08:51.332 Removing: /var/run/dpdk/spdk_pid413710
00:08:51.332 Removing: /var/run/dpdk/spdk_pid413843
00:08:51.332 Removing: /var/run/dpdk/spdk_pid414041
00:08:51.332 Removing: /var/run/dpdk/spdk_pid414367
00:08:51.332 Removing: /var/run/dpdk/spdk_pid414601
00:08:51.332 Removing: /var/run/dpdk/spdk_pid414839
00:08:51.332 Removing: /var/run/dpdk/spdk_pid415108
00:08:51.332 Removing: /var/run/dpdk/spdk_pid415610
00:08:51.332 Removing: /var/run/dpdk/spdk_pid415976
00:08:51.332 Removing: /var/run/dpdk/spdk_pid416410
00:08:51.332 Removing: /var/run/dpdk/spdk_pid416839
00:08:51.332 Removing: /var/run/dpdk/spdk_pid417278
00:08:51.332 Removing: /var/run/dpdk/spdk_pid417707
00:08:51.332 Removing: /var/run/dpdk/spdk_pid418136
00:08:51.332 Removing: /var/run/dpdk/spdk_pid418566
00:08:51.332 Removing: /var/run/dpdk/spdk_pid418889
00:08:51.332 Removing: /var/run/dpdk/spdk_pid419241
00:08:51.332 Removing: /var/run/dpdk/spdk_pid419673
00:08:51.332 Removing: /var/run/dpdk/spdk_pid420108
00:08:51.332 Removing: /var/run/dpdk/spdk_pid420537
00:08:51.332 Removing: /var/run/dpdk/spdk_pid420972
00:08:51.332 Removing: /var/run/dpdk/spdk_pid421402
00:08:51.332 Removing: /var/run/dpdk/spdk_pid421755
00:08:51.332 Removing: /var/run/dpdk/spdk_pid422080
00:08:51.332 Removing: /var/run/dpdk/spdk_pid422509
00:08:51.332 Removing: /var/run/dpdk/spdk_pid422946
00:08:51.332 Removing: /var/run/dpdk/spdk_pid423374
00:08:51.332 Removing: /var/run/dpdk/spdk_pid423812
00:08:51.332 Removing: /var/run/dpdk/spdk_pid424241
00:08:51.332 Removing: /var/run/dpdk/spdk_pid424677
00:08:51.332 Removing: /var/run/dpdk/spdk_pid424993
00:08:51.332 Removing: /var/run/dpdk/spdk_pid425347
00:08:51.332 Removing: /var/run/dpdk/spdk_pid425912
00:08:51.332 Removing: /var/run/dpdk/spdk_pid426305
00:08:51.332 Removing: /var/run/dpdk/spdk_pid426736
00:08:51.332 Removing: /var/run/dpdk/spdk_pid427172
00:08:51.332 Removing: /var/run/dpdk/spdk_pid427682
00:08:51.332 Removing: /var/run/dpdk/spdk_pid428091
00:08:51.332 Removing: /var/run/dpdk/spdk_pid428542
00:08:51.332 Clean
00:08:51.332 09:24:04 -- common/autotest_common.sh@1451 -- # return 0
00:08:51.332 09:24:04 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:08:51.332 09:24:04 -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:51.332 09:24:04 -- common/autotest_common.sh@10 -- # set +x
00:08:51.332 09:24:04 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:08:51.332 09:24:04 -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:51.332 09:24:04 -- common/autotest_common.sh@10 -- # set +x
00:08:51.332 09:24:04 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:51.332 09:24:04 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:51.590 09:24:04 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:51.590 09:24:04 -- spdk/autotest.sh@395 -- # hash lcov
00:08:51.590 09:24:04 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:08:51.590 09:24:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:08:51.590 09:24:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:08:51.590 09:24:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:51.590 09:24:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:51.590 09:24:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:51.590 09:24:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:51.590 09:24:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:51.590 09:24:04 -- paths/export.sh@5 -- $ export PATH
00:08:51.590 09:24:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:51.590 09:24:04 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:08:51.590 09:24:04 -- common/autobuild_common.sh@447 -- $ date +%s
00:08:51.590 09:24:04 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721892244.XXXXXX
00:08:51.590 09:24:04 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721892244.FDnbwS
00:08:51.590 09:24:04 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:08:51.590 09:24:04 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:08:51.590 09:24:04 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:08:51.590 09:24:04 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:08:51.590 09:24:04 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:08:51.590 09:24:04 -- common/autobuild_common.sh@463 -- $ get_config_params
00:08:51.590 09:24:04 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:08:51.590 09:24:04 -- common/autotest_common.sh@10 -- $ set +x
00:08:51.590 09:24:04 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:08:51.590 09:24:04 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:08:51.590 09:24:04 -- pm/common@17 -- $ local monitor
00:08:51.590 09:24:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:51.590 09:24:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:51.590 09:24:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:51.590 09:24:04 -- pm/common@21 -- $ date +%s
00:08:51.590 09:24:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:51.590 09:24:04 -- pm/common@21 -- $ date +%s
00:08:51.590 09:24:04 -- pm/common@25 -- $ sleep 1
00:08:51.590 09:24:04 -- pm/common@21 -- $ date +%s
00:08:51.590 09:24:04 -- pm/common@21 -- $ date +%s
00:08:51.590 09:24:04 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721892244
00:08:51.590 09:24:04 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721892244
00:08:51.590 09:24:04 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721892244
00:08:51.590 09:24:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721892244
00:08:51.590 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721892244_collect-cpu-load.pm.log
00:08:51.591 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721892244_collect-vmstat.pm.log
00:08:51.591 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721892244_collect-cpu-temp.pm.log
00:08:51.591 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721892244_collect-bmc-pm.bmc.pm.log
00:08:52.524 09:24:05 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
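
start_monitor_resources above launches the four pm collectors (collect-cpu-load, collect-vmstat, collect-cpu-temp and, via sudo -E, collect-bmc-pm) in the background, and the EXIT trap guarantees stop_monitor_resources runs on shutdown; the trace further below then walks per-monitor pid files and sends TERM. A sketch of the stop half of that pid-file contract, reconstructed from the traced pm/common@42-50 lines rather than copied from the helper itself:

  power_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
  for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pid_file=$power_dir/$monitor.pid
      [[ -e $pid_file ]] || continue
      kill -TERM "$(<"$pid_file")"   # the bmc monitor is killed with sudo -E in the trace below
  done
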
00:08:52.524 09:24:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j88
00:08:52.524 09:24:05 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:52.524 09:24:05 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:08:52.524 09:24:05 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:08:52.524 09:24:05 -- spdk/autopackage.sh@19 -- $ timing_finish
00:08:52.524 09:24:05 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:08:52.524 09:24:05 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:08:52.524 09:24:05 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:52.524 09:24:05 -- spdk/autopackage.sh@20 -- $ exit 0
00:08:52.524 09:24:05 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:08:52.524 09:24:05 -- pm/common@29 -- $ signal_monitor_resources TERM
00:08:52.524 09:24:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:08:52.524 09:24:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:52.524 09:24:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:08:52.524 09:24:05 -- pm/common@44 -- $ pid=435359
00:08:52.524 09:24:05 -- pm/common@50 -- $ kill -TERM 435359
00:08:52.524 09:24:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:52.524 09:24:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:08:52.524 09:24:05 -- pm/common@44 -- $ pid=435362
00:08:52.524 09:24:05 -- pm/common@50 -- $ kill -TERM 435362
00:08:52.524 09:24:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:52.524 09:24:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:08:52.524 09:24:05 -- pm/common@44 -- $ pid=435363
00:08:52.524 09:24:05 -- pm/common@50 -- $ kill -TERM 435363
00:08:52.524 09:24:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:52.524 09:24:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:08:52.525 09:24:05 -- pm/common@44 -- $ pid=435405
00:08:52.525 09:24:05 -- pm/common@50 -- $ sudo -E kill -TERM 435405
00:08:52.782 + [[ -n 288167 ]]
00:08:52.782 + sudo kill 288167
00:08:52.790 [Pipeline] }
00:08:52.803 [Pipeline] // stage
00:08:52.807 [Pipeline] }
00:08:52.824 [Pipeline] // timeout
00:08:52.829 [Pipeline] }
00:08:52.848 [Pipeline] // catchError
00:08:52.853 [Pipeline] }
00:08:52.867 [Pipeline] // wrap
00:08:52.874 [Pipeline] }
00:08:52.887 [Pipeline] // catchError
00:08:52.895 [Pipeline] stage
00:08:52.897 [Pipeline] { (Epilogue)
00:08:52.910 [Pipeline] catchError
00:08:52.912 [Pipeline] {
00:08:52.925 [Pipeline] echo
00:08:52.927 Cleanup processes
00:08:52.932 [Pipeline] sh
00:08:53.212 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:53.212 435604 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:08:53.212 436241 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:53.226 [Pipeline] sh
00:08:53.509 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:53.509 ++ grep -v 'sudo pgrep'
00:08:53.509 ++ awk '{print $1}'
00:08:53.509 + sudo kill -9 435604
00:08:53.522 [Pipeline] sh
00:08:53.853 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:08:54.855 [Pipeline] sh
00:08:55.147 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:08:55.147 Artifacts sizes are good
00:08:55.161 [Pipeline] archiveArtifacts
00:08:55.168 Archiving artifacts
00:08:55.249 [Pipeline] sh
00:08:55.531 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:08:55.545 [Pipeline] cleanWs
00:08:55.554 [WS-CLEANUP] Deleting project workspace...
00:08:55.554 [WS-CLEANUP] Deferred wipeout is used...
00:08:55.560 [WS-CLEANUP] done
00:08:55.564 [Pipeline] }
00:08:55.585 [Pipeline] // catchError
00:08:55.595 [Pipeline] sh
00:08:55.876 + logger -p user.info -t JENKINS-CI
00:08:55.884 [Pipeline] }
00:08:55.897 [Pipeline] // stage
00:08:55.902 [Pipeline] }
00:08:55.921 [Pipeline] // node
00:08:55.926 [Pipeline] End of Pipeline
00:08:55.953 Finished: SUCCESS