00:00:00.001 Started by upstream project "autotest-per-patch" build number 127113 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.075 The recommended git tool is: git 00:00:00.076 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.116 Fetching changes from the remote Git repository 00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.153 Using shallow fetch with depth 1 00:00:00.153 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.153 > git --version # timeout=10 00:00:00.184 > git --version # 'git version 2.39.2' 00:00:00.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.204 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.204 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.548 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.561 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.575 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:04.575 > git config core.sparsecheckout # timeout=10 00:00:04.589 > git read-tree -mu HEAD # timeout=10 00:00:04.607 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:04.641 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:04.641 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:04.727 [Pipeline] Start of Pipeline 00:00:04.739 [Pipeline] library 00:00:04.741 Loading library shm_lib@master 00:00:04.741 Library shm_lib@master is cached. Copying from home. 00:00:04.754 [Pipeline] node 00:00:04.766 Running on WFP13 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:04.768 [Pipeline] { 00:00:04.777 [Pipeline] catchError 00:00:04.778 [Pipeline] { 00:00:04.788 [Pipeline] wrap 00:00:04.796 [Pipeline] { 00:00:04.802 [Pipeline] stage 00:00:04.803 [Pipeline] { (Prologue) 00:00:04.973 [Pipeline] sh 00:00:05.257 + logger -p user.info -t JENKINS-CI 00:00:05.271 [Pipeline] echo 00:00:05.272 Node: WFP13 00:00:05.279 [Pipeline] sh 00:00:05.611 [Pipeline] setCustomBuildProperty 00:00:05.623 [Pipeline] echo 00:00:05.625 Cleanup processes 00:00:05.631 [Pipeline] sh 00:00:05.911 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:05.911 342883 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:05.922 [Pipeline] sh 00:00:06.204 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.204 ++ grep -v 'sudo pgrep' 00:00:06.204 ++ awk '{print $1}' 00:00:06.204 + sudo kill -9 00:00:06.204 + true 00:00:06.219 [Pipeline] cleanWs 00:00:06.228 [WS-CLEANUP] Deleting project workspace... 00:00:06.228 [WS-CLEANUP] Deferred wipeout is used... 
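The process-cleanup step logged above amounts to finding any stray processes still rooted in the job workspace and force-killing them. A minimal standalone sketch of that pattern, using the same workspace path this job uses (the trailing "|| true" mirrors the "+ true" in the log, since kill fails when pgrep finds nothing but itself):

# Kill leftover SPDK processes from a previous run in this workspace (sketch of the cleanup stage above).
WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true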
00:00:06.233 [WS-CLEANUP] done 00:00:06.238 [Pipeline] setCustomBuildProperty 00:00:06.254 [Pipeline] sh 00:00:06.537 + sudo git config --global --replace-all safe.directory '*' 00:00:06.616 [Pipeline] httpRequest 00:00:06.646 [Pipeline] echo 00:00:06.648 Sorcerer 10.211.164.101 is alive 00:00:06.657 [Pipeline] httpRequest 00:00:06.661 HttpMethod: GET 00:00:06.662 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.662 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.673 Response Code: HTTP/1.1 200 OK 00:00:06.673 Success: Status code 200 is in the accepted range: 200,404 00:00:06.674 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:13.180 [Pipeline] sh 00:00:13.462 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:13.476 [Pipeline] httpRequest 00:00:13.513 [Pipeline] echo 00:00:13.515 Sorcerer 10.211.164.101 is alive 00:00:13.522 [Pipeline] httpRequest 00:00:13.527 HttpMethod: GET 00:00:13.528 URL: http://10.211.164.101/packages/spdk_f41dbc2357192a659babd4b9e7d7cd5809ba98d9.tar.gz 00:00:13.528 Sending request to url: http://10.211.164.101/packages/spdk_f41dbc2357192a659babd4b9e7d7cd5809ba98d9.tar.gz 00:00:13.548 Response Code: HTTP/1.1 200 OK 00:00:13.548 Success: Status code 200 is in the accepted range: 200,404 00:00:13.549 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_f41dbc2357192a659babd4b9e7d7cd5809ba98d9.tar.gz 00:00:53.413 [Pipeline] sh 00:00:53.701 + tar --no-same-owner -xf spdk_f41dbc2357192a659babd4b9e7d7cd5809ba98d9.tar.gz 00:00:56.242 [Pipeline] sh 00:00:56.523 + git -C spdk log --oneline -n5 00:00:56.524 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:00:56.524 8ee2672c4 test/bdev: Add test for resized RAID with superblock 00:00:56.524 19f5787c8 raid: skip configured base bdevs in sb examine 00:00:56.524 3b9baa5f8 bdev/raid1: Support resize when increasing the size of base bdevs 00:00:56.524 25a9ccb98 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:56.538 [Pipeline] sh 00:00:56.819 + ip --json address 00:00:56.833 [Pipeline] readJSON 00:00:56.852 [Pipeline] echo 00:00:56.854 NIC with Beetle address is already setup (192.168.10.10) 00:00:56.860 [Pipeline] withCredentials 00:00:56.874 Masking supported pattern matches of $beetle_key 00:00:56.876 [Pipeline] { 00:00:56.884 [Pipeline] sh 00:00:57.163 + ssh -i **** -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectionAttempts=5 root@192.168.10.11 'for gpio in {0..10}; do Beetle --SetGpio "$gpio" HIGH; done' 00:00:57.421 Warning: Permanently added '192.168.10.11' (ED25519) to the list of known hosts. 
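The two httpRequest/tar sequences above pull pre-packed snapshots of the jbp and spdk repositories from the internal Sorcerer package cache instead of cloning them. The pipeline uses the Jenkins httpRequest step; a rough manual equivalent using curl (an assumption, not what the job actually runs) for the jbp archive would be:

# Fetch and unpack a cached repo snapshot (sketch; the spdk archive is handled the same way).
CACHE=http://10.211.164.101/packages
cd /var/jenkins/workspace/short-fuzz-phy-autotest
curl -o jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz \
    "$CACHE/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz"
# --no-same-owner keeps extracted files owned by the Jenkins user instead of the archive's original uid/gid.
tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz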
00:00:59.963 [Pipeline] } 00:00:59.988 [Pipeline] // withCredentials 00:00:59.994 [Pipeline] } 00:01:00.013 [Pipeline] // stage 00:01:00.022 [Pipeline] stage 00:01:00.025 [Pipeline] { (Prepare) 00:01:00.042 [Pipeline] writeFile 00:01:00.056 [Pipeline] sh 00:01:00.339 + logger -p user.info -t JENKINS-CI 00:01:00.354 [Pipeline] sh 00:01:00.635 + logger -p user.info -t JENKINS-CI 00:01:00.648 [Pipeline] sh 00:01:00.931 + cat autorun-spdk.conf 00:01:00.931 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.931 SPDK_TEST_FUZZER_SHORT=1 00:01:00.931 SPDK_TEST_FUZZER=1 00:01:00.931 SPDK_RUN_UBSAN=1 00:01:00.938 RUN_NIGHTLY=0 00:01:00.944 [Pipeline] readFile 00:01:00.976 [Pipeline] withEnv 00:01:00.978 [Pipeline] { 00:01:00.994 [Pipeline] sh 00:01:01.278 + set -ex 00:01:01.278 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:01.278 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:01.278 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.278 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:01.278 ++ SPDK_TEST_FUZZER=1 00:01:01.278 ++ SPDK_RUN_UBSAN=1 00:01:01.278 ++ RUN_NIGHTLY=0 00:01:01.278 + case $SPDK_TEST_NVMF_NICS in 00:01:01.278 + DRIVERS= 00:01:01.278 + [[ -n '' ]] 00:01:01.278 + exit 0 00:01:01.288 [Pipeline] } 00:01:01.306 [Pipeline] // withEnv 00:01:01.312 [Pipeline] } 00:01:01.330 [Pipeline] // stage 00:01:01.341 [Pipeline] catchError 00:01:01.343 [Pipeline] { 00:01:01.360 [Pipeline] timeout 00:01:01.360 Timeout set to expire in 30 min 00:01:01.362 [Pipeline] { 00:01:01.379 [Pipeline] stage 00:01:01.381 [Pipeline] { (Tests) 00:01:01.397 [Pipeline] sh 00:01:01.680 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:01.680 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:01.680 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:01.680 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:01.680 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:01.680 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:01.680 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:01.680 + [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:01.680 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:01.680 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:01.680 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:01.680 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:01.680 + source /etc/os-release 00:01:01.680 ++ NAME='Fedora Linux' 00:01:01.680 ++ VERSION='38 (Cloud Edition)' 00:01:01.680 ++ ID=fedora 00:01:01.680 ++ VERSION_ID=38 00:01:01.680 ++ VERSION_CODENAME= 00:01:01.680 ++ PLATFORM_ID=platform:f38 00:01:01.680 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:01.680 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:01.680 ++ LOGO=fedora-logo-icon 00:01:01.680 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:01.680 ++ HOME_URL=https://fedoraproject.org/ 00:01:01.680 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:01.680 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:01.680 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:01.680 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:01.680 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:01.680 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:01.680 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:01.680 ++ SUPPORT_END=2024-05-14 00:01:01.680 ++ VARIANT='Cloud Edition' 00:01:01.680 ++ VARIANT_ID=cloud 00:01:01.680 + uname -a 00:01:01.680 Linux spdk-wfp-13 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:01.680 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:04.977 Hugepages 00:01:04.977 node hugesize free / total 00:01:04.977 node0 1048576kB 0 / 0 00:01:04.977 node0 2048kB 0 / 0 00:01:04.977 node1 1048576kB 0 / 0 00:01:04.977 node1 2048kB 0 / 0 00:01:04.977 00:01:04.977 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:04.977 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:04.977 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:04.977 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:04.977 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:04.977 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:04.977 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:04.977 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:04.977 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:04.977 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:04.977 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:04.977 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:04.977 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:04.977 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:04.977 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:04.977 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:04.977 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:04.977 NVMe 0000:dc:00.0 8086 0953 1 nvme nvme3 nvme3n1 00:01:04.977 NVMe 0000:dd:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:04.977 NVMe 0000:de:00.0 8086 0953 1 nvme nvme2 nvme2n1 00:01:05.293 NVMe 0000:df:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:01:05.293 + rm -f /tmp/spdk-ld-path 00:01:05.293 + source autorun-spdk.conf 00:01:05.293 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.293 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:05.294 ++ SPDK_TEST_FUZZER=1 00:01:05.294 ++ SPDK_RUN_UBSAN=1 00:01:05.294 ++ RUN_NIGHTLY=0 00:01:05.294 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:05.294 + [[ -n '' ]] 00:01:05.294 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:05.294 + for M in 
/var/spdk/build-*-manifest.txt 00:01:05.294 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:05.294 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:05.294 + for M in /var/spdk/build-*-manifest.txt 00:01:05.294 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:05.294 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:05.294 ++ uname 00:01:05.294 + [[ Linux == \L\i\n\u\x ]] 00:01:05.294 + sudo dmesg -T 00:01:05.294 + sudo dmesg --clear 00:01:05.294 + dmesg_pid=344566 00:01:05.294 + [[ Fedora Linux == FreeBSD ]] 00:01:05.294 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:05.294 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:05.294 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:05.294 + [[ -x /usr/src/fio-static/fio ]] 00:01:05.294 + export FIO_BIN=/usr/src/fio-static/fio 00:01:05.294 + FIO_BIN=/usr/src/fio-static/fio 00:01:05.294 + sudo dmesg -Tw 00:01:05.294 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:05.294 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:05.294 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:05.294 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:05.294 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:05.294 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:05.294 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:05.294 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:05.294 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:05.294 Test configuration: 00:01:05.294 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.294 SPDK_TEST_FUZZER_SHORT=1 00:01:05.294 SPDK_TEST_FUZZER=1 00:01:05.294 SPDK_RUN_UBSAN=1 00:01:05.294 RUN_NIGHTLY=0 22:40:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:05.294 22:40:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:05.294 22:40:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:05.294 22:40:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:05.294 22:40:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.294 22:40:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.294 22:40:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
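From this point the run is driven by spdk/autorun.sh, invoked with the small config file written during the Prepare stage (its contents are echoed above as "Test configuration"). Reproducing the same invocation by hand would look roughly like this, with the paths this job uses:

# Recreate the job's test configuration and hand it to autorun.sh (sketch).
cd /var/jenkins/workspace/short-fuzz-phy-autotest
cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_FUZZER_SHORT=1
SPDK_TEST_FUZZER=1
SPDK_RUN_UBSAN=1
RUN_NIGHTLY=0
EOF
spdk/autorun.sh "$PWD/autorun-spdk.conf"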
00:01:05.294 22:40:03 -- paths/export.sh@5 -- $ export PATH 00:01:05.294 22:40:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.294 22:40:03 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:05.294 22:40:03 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:05.294 22:40:03 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721853603.XXXXXX 00:01:05.294 22:40:03 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721853603.TPi8au 00:01:05.294 22:40:03 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:05.294 22:40:03 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:05.294 22:40:03 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:05.294 22:40:03 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:05.294 22:40:03 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:05.294 22:40:03 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:05.294 22:40:03 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:05.294 22:40:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:05.294 22:40:03 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:05.294 22:40:03 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:05.294 22:40:03 -- pm/common@17 -- $ local monitor 00:01:05.294 22:40:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:05.294 22:40:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:05.294 22:40:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:05.294 22:40:03 -- pm/common@21 -- $ date +%s 00:01:05.294 22:40:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:05.294 22:40:03 -- pm/common@21 -- $ date +%s 00:01:05.294 22:40:03 -- pm/common@25 -- $ sleep 1 00:01:05.294 22:40:03 -- pm/common@21 -- $ date +%s 00:01:05.294 22:40:03 -- pm/common@21 -- $ date +%s 00:01:05.294 22:40:03 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721853603 00:01:05.294 22:40:03 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721853603 00:01:05.294 22:40:03 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721853603 00:01:05.294 22:40:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721853603 00:01:05.553 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721853603_collect-vmstat.pm.log 00:01:05.553 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721853603_collect-cpu-load.pm.log 00:01:05.553 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721853603_collect-cpu-temp.pm.log 00:01:05.553 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721853603_collect-bmc-pm.bmc.pm.log 00:01:06.489 22:40:04 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:06.489 22:40:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:06.489 22:40:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:06.489 22:40:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:06.489 22:40:04 -- spdk/autobuild.sh@16 -- $ date -u 00:01:06.489 Wed Jul 24 08:40:04 PM UTC 2024 00:01:06.489 22:40:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:06.489 v24.09-pre-317-gf41dbc235 00:01:06.489 22:40:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:06.489 22:40:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:06.489 22:40:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:06.489 22:40:04 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:06.489 22:40:04 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:06.489 22:40:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.489 ************************************ 00:01:06.489 START TEST ubsan 00:01:06.489 ************************************ 00:01:06.489 22:40:04 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:06.489 using ubsan 00:01:06.489 00:01:06.489 real 0m0.000s 00:01:06.489 user 0m0.000s 00:01:06.489 sys 0m0.000s 00:01:06.489 22:40:04 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:06.489 22:40:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:06.489 ************************************ 00:01:06.489 END TEST ubsan 00:01:06.489 ************************************ 00:01:06.489 22:40:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:06.489 22:40:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:06.489 22:40:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:06.489 22:40:04 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:06.489 22:40:04 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:06.489 22:40:04 -- common/autobuild_common.sh@435 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:06.489 22:40:04 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:06.489 22:40:04 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:06.489 22:40:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.489 ************************************ 00:01:06.489 START TEST autobuild_llvm_precompile 00:01:06.489 ************************************ 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:01:06.489 22:40:04 autobuild_llvm_precompile -- 
common/autobuild_common.sh@32 -- $ clang --version 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:01:06.489 Target: x86_64-redhat-linux-gnu 00:01:06.489 Thread model: posix 00:01:06.489 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:01:06.489 22:40:04 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:06.748 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:06.748 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:07.314 Using 'verbs' RDMA provider 00:01:20.461 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:32.736 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:32.994 Creating mk/config.mk...done. 00:01:32.995 Creating mk/cc.flags.mk...done. 00:01:32.995 Type 'make' to build. 
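The autobuild_llvm_precompile test above detects the installed clang major version, switches CC/CXX to it, and appends the matching libFuzzer archive to the configure flags. A condensed sketch of those steps follows; the job's full --with-* flag list is the configure command shown above, only a subset is repeated here:

# Pick the clang toolchain and fuzzer runtime the way autobuild_llvm_precompile does (sketch).
re='version (([0-9]+)\.([0-9]+)\.([0-9]+))'
[[ $(clang --version) =~ $re ]] && clang_num=${BASH_REMATCH[2]}   # 16 on this builder
export CC=clang-$clang_num CXX=clang++-$clang_num
fuzzer_lib=/usr/lib64/clang/$clang_num/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
./configure --enable-debug --enable-werror --enable-ubsan --with-vfio-user \
    --with-fuzzer="$fuzzer_lib"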
00:01:32.995 00:01:32.995 real 0m26.362s 00:01:32.995 user 0m12.577s 00:01:32.995 sys 0m12.866s 00:01:32.995 22:40:30 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:32.995 22:40:30 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:32.995 ************************************ 00:01:32.995 END TEST autobuild_llvm_precompile 00:01:32.995 ************************************ 00:01:32.995 22:40:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:32.995 22:40:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:32.995 22:40:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:32.995 22:40:31 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:32.995 22:40:31 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:33.253 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:33.253 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:33.512 Using 'verbs' RDMA provider 00:01:46.659 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:56.630 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:56.630 Creating mk/config.mk...done. 00:01:56.630 Creating mk/cc.flags.mk...done. 00:01:56.630 Type 'make' to build. 00:01:56.630 22:40:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j88 00:01:56.630 22:40:53 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:56.630 22:40:53 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:56.630 22:40:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.630 ************************************ 00:01:56.630 START TEST make 00:01:56.630 ************************************ 00:01:56.630 22:40:54 make -- common/autotest_common.sh@1125 -- $ make -j88 00:01:56.630 make[1]: Nothing to be done for 'all'. 
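The "START TEST" / "END TEST" banners and the real/user/sys timings that recur throughout this log come from SPDK's run_test helper wrapping each command. A hypothetical, much-simplified stand-in (not the actual implementation in autotest_common.sh) showing the shape of what it does:

# Hypothetical, simplified run_test look-alike; only illustrates the banners and timing seen in this log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test make make -j88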
00:01:58.014 The Meson build system 00:01:58.014 Version: 1.3.1 00:01:58.014 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:58.014 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:58.014 Build type: native build 00:01:58.014 Project name: libvfio-user 00:01:58.014 Project version: 0.0.1 00:01:58.014 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:58.014 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:58.014 Host machine cpu family: x86_64 00:01:58.014 Host machine cpu: x86_64 00:01:58.014 Run-time dependency threads found: YES 00:01:58.014 Library dl found: YES 00:01:58.014 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:58.014 Run-time dependency json-c found: YES 0.17 00:01:58.014 Run-time dependency cmocka found: YES 1.1.7 00:01:58.014 Program pytest-3 found: NO 00:01:58.014 Program flake8 found: NO 00:01:58.014 Program misspell-fixer found: NO 00:01:58.014 Program restructuredtext-lint found: NO 00:01:58.014 Program valgrind found: YES (/usr/bin/valgrind) 00:01:58.014 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.014 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.014 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.014 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:58.014 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:58.014 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:58.014 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
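The Meson output above is SPDK configuring its bundled libvfio-user; the build type, library type and libdir appear in the "User defined options" summary just below, and the install step with DESTDIR follows it. A rough manual equivalent of that configure/build/install sequence, using the source and build directories reported above:

# Manual equivalent of the libvfio-user meson configure/build driven by the SPDK build (sketch).
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
    --buildtype debug --default-library static --libdir /usr/local/lib
ninja -C "$SPDK/build/libvfio-user/build-debug"
DESTDIR="$SPDK/build/libvfio-user" meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"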
00:01:58.014 Build targets in project: 8 00:01:58.014 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:58.014 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:58.014 00:01:58.014 libvfio-user 0.0.1 00:01:58.014 00:01:58.014 User defined options 00:01:58.014 buildtype : debug 00:01:58.014 default_library: static 00:01:58.014 libdir : /usr/local/lib 00:01:58.014 00:01:58.014 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.272 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:58.272 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:58.272 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:58.272 [3/36] Compiling C object samples/null.p/null.c.o 00:01:58.272 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:58.272 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:58.272 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:58.272 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:58.272 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:58.272 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:58.272 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:58.272 [11/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:58.272 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:58.272 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:58.272 [14/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:58.272 [15/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:58.272 [16/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:58.272 [17/36] Compiling C object samples/server.p/server.c.o 00:01:58.272 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:58.272 [19/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:58.272 [20/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:58.272 [21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:58.272 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:58.272 [23/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:58.272 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:58.272 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:58.272 [26/36] Compiling C object samples/client.p/client.c.o 00:01:58.272 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:58.272 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:58.272 [29/36] Linking static target lib/libvfio-user.a 00:01:58.272 [30/36] Linking target samples/client 00:01:58.272 [31/36] Linking target samples/server 00:01:58.272 [32/36] Linking target samples/lspci 00:01:58.272 [33/36] Linking target samples/null 00:01:58.272 [34/36] Linking target samples/shadow_ioeventfd_server 00:01:58.272 [35/36] Linking target test/unit_tests 00:01:58.272 [36/36] Linking target samples/gpio-pci-idio-16 00:01:58.530 INFO: autodetecting backend as ninja 00:01:58.530 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:58.530 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:58.788 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:58.788 ninja: no work to do. 00:02:04.058 The Meson build system 00:02:04.058 Version: 1.3.1 00:02:04.058 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:04.058 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:04.058 Build type: native build 00:02:04.058 Program cat found: YES (/usr/bin/cat) 00:02:04.058 Project name: DPDK 00:02:04.058 Project version: 24.03.0 00:02:04.058 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:04.058 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:04.058 Host machine cpu family: x86_64 00:02:04.058 Host machine cpu: x86_64 00:02:04.058 Message: ## Building in Developer Mode ## 00:02:04.058 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:04.058 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:04.058 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:04.058 Program python3 found: YES (/usr/bin/python3) 00:02:04.058 Program cat found: YES (/usr/bin/cat) 00:02:04.058 Compiler for C supports arguments -march=native: YES 00:02:04.058 Checking for size of "void *" : 8 00:02:04.058 Checking for size of "void *" : 8 (cached) 00:02:04.058 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:04.058 Library m found: YES 00:02:04.058 Library numa found: YES 00:02:04.058 Has header "numaif.h" : YES 00:02:04.058 Library fdt found: NO 00:02:04.058 Library execinfo found: NO 00:02:04.058 Has header "execinfo.h" : YES 00:02:04.058 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:04.058 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:04.058 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:04.058 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:04.058 Run-time dependency openssl found: YES 3.0.9 00:02:04.058 Run-time dependency libpcap found: YES 1.10.4 00:02:04.058 Has header "pcap.h" with dependency libpcap: YES 00:02:04.058 Compiler for C supports arguments -Wcast-qual: YES 00:02:04.058 Compiler for C supports arguments -Wdeprecated: YES 00:02:04.058 Compiler for C supports arguments -Wformat: YES 00:02:04.058 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:04.058 Compiler for C supports arguments -Wformat-security: YES 00:02:04.058 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.058 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:04.058 Compiler for C supports arguments -Wnested-externs: YES 00:02:04.058 Compiler for C supports arguments -Wold-style-definition: YES 00:02:04.058 Compiler for C supports arguments -Wpointer-arith: YES 00:02:04.058 Compiler for C supports arguments -Wsign-compare: YES 00:02:04.058 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:04.058 Compiler for C supports arguments -Wundef: YES 00:02:04.058 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.058 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:04.058 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:04.058 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:04.058 Program objdump found: YES (/usr/bin/objdump) 00:02:04.058 Compiler for C supports arguments -mavx512f: YES 00:02:04.058 Checking if "AVX512 checking" compiles: YES 00:02:04.058 Fetching value of define "__SSE4_2__" : 1 00:02:04.058 Fetching value of define "__AES__" : 1 00:02:04.058 Fetching value of define "__AVX__" : 1 00:02:04.058 Fetching value of define "__AVX2__" : 1 00:02:04.058 Fetching value of define "__AVX512BW__" : 1 00:02:04.058 Fetching value of define "__AVX512CD__" : 1 00:02:04.058 Fetching value of define "__AVX512DQ__" : 1 00:02:04.058 Fetching value of define "__AVX512F__" : 1 00:02:04.058 Fetching value of define "__AVX512VL__" : 1 00:02:04.058 Fetching value of define "__PCLMUL__" : 1 00:02:04.058 Fetching value of define "__RDRND__" : 1 00:02:04.058 Fetching value of define "__RDSEED__" : 1 00:02:04.058 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:04.058 Fetching value of define "__znver1__" : (undefined) 00:02:04.058 Fetching value of define "__znver2__" : (undefined) 00:02:04.058 Fetching value of define "__znver3__" : (undefined) 00:02:04.058 Fetching value of define "__znver4__" : (undefined) 00:02:04.058 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:04.058 Message: lib/log: Defining dependency "log" 00:02:04.058 Message: lib/kvargs: Defining dependency "kvargs" 00:02:04.058 Message: lib/telemetry: Defining dependency "telemetry" 00:02:04.058 Checking for function "getentropy" : NO 00:02:04.058 Message: lib/eal: Defining dependency "eal" 00:02:04.058 Message: lib/ring: Defining dependency "ring" 00:02:04.058 Message: lib/rcu: Defining dependency "rcu" 00:02:04.058 Message: lib/mempool: Defining dependency "mempool" 00:02:04.058 Message: lib/mbuf: Defining dependency "mbuf" 00:02:04.058 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:04.058 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:04.058 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:04.058 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:04.058 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:04.058 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:04.058 Compiler for C supports arguments -mpclmul: YES 00:02:04.058 Compiler for C supports arguments -maes: YES 00:02:04.058 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.058 Compiler for C supports arguments -mavx512bw: YES 00:02:04.058 Compiler for C supports arguments -mavx512dq: YES 00:02:04.058 Compiler for C supports arguments -mavx512vl: YES 00:02:04.058 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:04.058 Compiler for C supports arguments -mavx2: YES 00:02:04.058 Compiler for C supports arguments -mavx: YES 00:02:04.058 Message: lib/net: Defining dependency "net" 00:02:04.058 Message: lib/meter: Defining dependency "meter" 00:02:04.058 Message: lib/ethdev: Defining dependency "ethdev" 00:02:04.058 Message: lib/pci: Defining dependency "pci" 00:02:04.058 Message: lib/cmdline: Defining dependency "cmdline" 00:02:04.058 Message: lib/hash: Defining dependency "hash" 00:02:04.058 Message: lib/timer: Defining dependency "timer" 00:02:04.058 Message: lib/compressdev: Defining dependency "compressdev" 00:02:04.058 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:04.058 Message: lib/dmadev: Defining dependency "dmadev" 00:02:04.058 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:04.058 Message: lib/power: Defining dependency "power" 00:02:04.058 Message: lib/reorder: Defining 
dependency "reorder" 00:02:04.058 Message: lib/security: Defining dependency "security" 00:02:04.058 Has header "linux/userfaultfd.h" : YES 00:02:04.058 Has header "linux/vduse.h" : YES 00:02:04.059 Message: lib/vhost: Defining dependency "vhost" 00:02:04.059 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:04.059 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:04.059 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:04.059 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:04.059 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:04.059 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:04.059 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:04.059 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:04.059 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:04.059 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:04.059 Program doxygen found: YES (/usr/bin/doxygen) 00:02:04.059 Configuring doxy-api-html.conf using configuration 00:02:04.059 Configuring doxy-api-man.conf using configuration 00:02:04.059 Program mandb found: YES (/usr/bin/mandb) 00:02:04.059 Program sphinx-build found: NO 00:02:04.059 Configuring rte_build_config.h using configuration 00:02:04.059 Message: 00:02:04.059 ================= 00:02:04.059 Applications Enabled 00:02:04.059 ================= 00:02:04.059 00:02:04.059 apps: 00:02:04.059 00:02:04.059 00:02:04.059 Message: 00:02:04.059 ================= 00:02:04.059 Libraries Enabled 00:02:04.059 ================= 00:02:04.059 00:02:04.059 libs: 00:02:04.059 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.059 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:04.059 cryptodev, dmadev, power, reorder, security, vhost, 00:02:04.059 00:02:04.059 Message: 00:02:04.059 =============== 00:02:04.059 Drivers Enabled 00:02:04.059 =============== 00:02:04.059 00:02:04.059 common: 00:02:04.059 00:02:04.059 bus: 00:02:04.059 pci, vdev, 00:02:04.059 mempool: 00:02:04.059 ring, 00:02:04.059 dma: 00:02:04.059 00:02:04.059 net: 00:02:04.059 00:02:04.059 crypto: 00:02:04.059 00:02:04.059 compress: 00:02:04.059 00:02:04.059 vdpa: 00:02:04.059 00:02:04.059 00:02:04.059 Message: 00:02:04.059 ================= 00:02:04.059 Content Skipped 00:02:04.059 ================= 00:02:04.059 00:02:04.059 apps: 00:02:04.059 dumpcap: explicitly disabled via build config 00:02:04.059 graph: explicitly disabled via build config 00:02:04.059 pdump: explicitly disabled via build config 00:02:04.059 proc-info: explicitly disabled via build config 00:02:04.059 test-acl: explicitly disabled via build config 00:02:04.059 test-bbdev: explicitly disabled via build config 00:02:04.059 test-cmdline: explicitly disabled via build config 00:02:04.059 test-compress-perf: explicitly disabled via build config 00:02:04.059 test-crypto-perf: explicitly disabled via build config 00:02:04.059 test-dma-perf: explicitly disabled via build config 00:02:04.059 test-eventdev: explicitly disabled via build config 00:02:04.059 test-fib: explicitly disabled via build config 00:02:04.059 test-flow-perf: explicitly disabled via build config 00:02:04.059 test-gpudev: explicitly disabled via build config 00:02:04.059 test-mldev: explicitly disabled via build config 00:02:04.059 test-pipeline: explicitly disabled via build config 00:02:04.059 test-pmd: explicitly 
disabled via build config 00:02:04.059 test-regex: explicitly disabled via build config 00:02:04.059 test-sad: explicitly disabled via build config 00:02:04.059 test-security-perf: explicitly disabled via build config 00:02:04.059 00:02:04.059 libs: 00:02:04.059 argparse: explicitly disabled via build config 00:02:04.059 metrics: explicitly disabled via build config 00:02:04.059 acl: explicitly disabled via build config 00:02:04.059 bbdev: explicitly disabled via build config 00:02:04.059 bitratestats: explicitly disabled via build config 00:02:04.059 bpf: explicitly disabled via build config 00:02:04.059 cfgfile: explicitly disabled via build config 00:02:04.059 distributor: explicitly disabled via build config 00:02:04.059 efd: explicitly disabled via build config 00:02:04.059 eventdev: explicitly disabled via build config 00:02:04.059 dispatcher: explicitly disabled via build config 00:02:04.059 gpudev: explicitly disabled via build config 00:02:04.059 gro: explicitly disabled via build config 00:02:04.059 gso: explicitly disabled via build config 00:02:04.059 ip_frag: explicitly disabled via build config 00:02:04.059 jobstats: explicitly disabled via build config 00:02:04.059 latencystats: explicitly disabled via build config 00:02:04.059 lpm: explicitly disabled via build config 00:02:04.059 member: explicitly disabled via build config 00:02:04.059 pcapng: explicitly disabled via build config 00:02:04.059 rawdev: explicitly disabled via build config 00:02:04.059 regexdev: explicitly disabled via build config 00:02:04.059 mldev: explicitly disabled via build config 00:02:04.059 rib: explicitly disabled via build config 00:02:04.059 sched: explicitly disabled via build config 00:02:04.059 stack: explicitly disabled via build config 00:02:04.059 ipsec: explicitly disabled via build config 00:02:04.059 pdcp: explicitly disabled via build config 00:02:04.059 fib: explicitly disabled via build config 00:02:04.059 port: explicitly disabled via build config 00:02:04.059 pdump: explicitly disabled via build config 00:02:04.059 table: explicitly disabled via build config 00:02:04.059 pipeline: explicitly disabled via build config 00:02:04.059 graph: explicitly disabled via build config 00:02:04.059 node: explicitly disabled via build config 00:02:04.059 00:02:04.059 drivers: 00:02:04.059 common/cpt: not in enabled drivers build config 00:02:04.059 common/dpaax: not in enabled drivers build config 00:02:04.059 common/iavf: not in enabled drivers build config 00:02:04.059 common/idpf: not in enabled drivers build config 00:02:04.059 common/ionic: not in enabled drivers build config 00:02:04.059 common/mvep: not in enabled drivers build config 00:02:04.059 common/octeontx: not in enabled drivers build config 00:02:04.059 bus/auxiliary: not in enabled drivers build config 00:02:04.059 bus/cdx: not in enabled drivers build config 00:02:04.059 bus/dpaa: not in enabled drivers build config 00:02:04.059 bus/fslmc: not in enabled drivers build config 00:02:04.059 bus/ifpga: not in enabled drivers build config 00:02:04.059 bus/platform: not in enabled drivers build config 00:02:04.059 bus/uacce: not in enabled drivers build config 00:02:04.059 bus/vmbus: not in enabled drivers build config 00:02:04.059 common/cnxk: not in enabled drivers build config 00:02:04.059 common/mlx5: not in enabled drivers build config 00:02:04.059 common/nfp: not in enabled drivers build config 00:02:04.059 common/nitrox: not in enabled drivers build config 00:02:04.059 common/qat: not in enabled drivers build config 
00:02:04.059 common/sfc_efx: not in enabled drivers build config 00:02:04.059 mempool/bucket: not in enabled drivers build config 00:02:04.059 mempool/cnxk: not in enabled drivers build config 00:02:04.059 mempool/dpaa: not in enabled drivers build config 00:02:04.059 mempool/dpaa2: not in enabled drivers build config 00:02:04.059 mempool/octeontx: not in enabled drivers build config 00:02:04.059 mempool/stack: not in enabled drivers build config 00:02:04.059 dma/cnxk: not in enabled drivers build config 00:02:04.059 dma/dpaa: not in enabled drivers build config 00:02:04.059 dma/dpaa2: not in enabled drivers build config 00:02:04.059 dma/hisilicon: not in enabled drivers build config 00:02:04.059 dma/idxd: not in enabled drivers build config 00:02:04.059 dma/ioat: not in enabled drivers build config 00:02:04.059 dma/skeleton: not in enabled drivers build config 00:02:04.059 net/af_packet: not in enabled drivers build config 00:02:04.059 net/af_xdp: not in enabled drivers build config 00:02:04.059 net/ark: not in enabled drivers build config 00:02:04.059 net/atlantic: not in enabled drivers build config 00:02:04.059 net/avp: not in enabled drivers build config 00:02:04.059 net/axgbe: not in enabled drivers build config 00:02:04.059 net/bnx2x: not in enabled drivers build config 00:02:04.059 net/bnxt: not in enabled drivers build config 00:02:04.059 net/bonding: not in enabled drivers build config 00:02:04.059 net/cnxk: not in enabled drivers build config 00:02:04.059 net/cpfl: not in enabled drivers build config 00:02:04.059 net/cxgbe: not in enabled drivers build config 00:02:04.059 net/dpaa: not in enabled drivers build config 00:02:04.059 net/dpaa2: not in enabled drivers build config 00:02:04.059 net/e1000: not in enabled drivers build config 00:02:04.059 net/ena: not in enabled drivers build config 00:02:04.059 net/enetc: not in enabled drivers build config 00:02:04.059 net/enetfec: not in enabled drivers build config 00:02:04.059 net/enic: not in enabled drivers build config 00:02:04.059 net/failsafe: not in enabled drivers build config 00:02:04.059 net/fm10k: not in enabled drivers build config 00:02:04.059 net/gve: not in enabled drivers build config 00:02:04.059 net/hinic: not in enabled drivers build config 00:02:04.059 net/hns3: not in enabled drivers build config 00:02:04.059 net/i40e: not in enabled drivers build config 00:02:04.059 net/iavf: not in enabled drivers build config 00:02:04.059 net/ice: not in enabled drivers build config 00:02:04.059 net/idpf: not in enabled drivers build config 00:02:04.059 net/igc: not in enabled drivers build config 00:02:04.059 net/ionic: not in enabled drivers build config 00:02:04.059 net/ipn3ke: not in enabled drivers build config 00:02:04.059 net/ixgbe: not in enabled drivers build config 00:02:04.059 net/mana: not in enabled drivers build config 00:02:04.059 net/memif: not in enabled drivers build config 00:02:04.059 net/mlx4: not in enabled drivers build config 00:02:04.059 net/mlx5: not in enabled drivers build config 00:02:04.059 net/mvneta: not in enabled drivers build config 00:02:04.059 net/mvpp2: not in enabled drivers build config 00:02:04.059 net/netvsc: not in enabled drivers build config 00:02:04.059 net/nfb: not in enabled drivers build config 00:02:04.059 net/nfp: not in enabled drivers build config 00:02:04.060 net/ngbe: not in enabled drivers build config 00:02:04.060 net/null: not in enabled drivers build config 00:02:04.060 net/octeontx: not in enabled drivers build config 00:02:04.060 net/octeon_ep: not in enabled 
drivers build config 00:02:04.060 net/pcap: not in enabled drivers build config 00:02:04.060 net/pfe: not in enabled drivers build config 00:02:04.060 net/qede: not in enabled drivers build config 00:02:04.060 net/ring: not in enabled drivers build config 00:02:04.060 net/sfc: not in enabled drivers build config 00:02:04.060 net/softnic: not in enabled drivers build config 00:02:04.060 net/tap: not in enabled drivers build config 00:02:04.060 net/thunderx: not in enabled drivers build config 00:02:04.060 net/txgbe: not in enabled drivers build config 00:02:04.060 net/vdev_netvsc: not in enabled drivers build config 00:02:04.060 net/vhost: not in enabled drivers build config 00:02:04.060 net/virtio: not in enabled drivers build config 00:02:04.060 net/vmxnet3: not in enabled drivers build config 00:02:04.060 raw/*: missing internal dependency, "rawdev" 00:02:04.060 crypto/armv8: not in enabled drivers build config 00:02:04.060 crypto/bcmfs: not in enabled drivers build config 00:02:04.060 crypto/caam_jr: not in enabled drivers build config 00:02:04.060 crypto/ccp: not in enabled drivers build config 00:02:04.060 crypto/cnxk: not in enabled drivers build config 00:02:04.060 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.060 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.060 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.060 crypto/mlx5: not in enabled drivers build config 00:02:04.060 crypto/mvsam: not in enabled drivers build config 00:02:04.060 crypto/nitrox: not in enabled drivers build config 00:02:04.060 crypto/null: not in enabled drivers build config 00:02:04.060 crypto/octeontx: not in enabled drivers build config 00:02:04.060 crypto/openssl: not in enabled drivers build config 00:02:04.060 crypto/scheduler: not in enabled drivers build config 00:02:04.060 crypto/uadk: not in enabled drivers build config 00:02:04.060 crypto/virtio: not in enabled drivers build config 00:02:04.060 compress/isal: not in enabled drivers build config 00:02:04.060 compress/mlx5: not in enabled drivers build config 00:02:04.060 compress/nitrox: not in enabled drivers build config 00:02:04.060 compress/octeontx: not in enabled drivers build config 00:02:04.060 compress/zlib: not in enabled drivers build config 00:02:04.060 regex/*: missing internal dependency, "regexdev" 00:02:04.060 ml/*: missing internal dependency, "mldev" 00:02:04.060 vdpa/ifc: not in enabled drivers build config 00:02:04.060 vdpa/mlx5: not in enabled drivers build config 00:02:04.060 vdpa/nfp: not in enabled drivers build config 00:02:04.060 vdpa/sfc: not in enabled drivers build config 00:02:04.060 event/*: missing internal dependency, "eventdev" 00:02:04.060 baseband/*: missing internal dependency, "bbdev" 00:02:04.060 gpu/*: missing internal dependency, "gpudev" 00:02:04.060 00:02:04.060 00:02:04.060 Build targets in project: 85 00:02:04.060 00:02:04.060 DPDK 24.03.0 00:02:04.060 00:02:04.060 User defined options 00:02:04.060 buildtype : debug 00:02:04.060 default_library : static 00:02:04.060 libdir : lib 00:02:04.060 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:04.060 c_args : -fPIC -Werror 00:02:04.060 c_link_args : 00:02:04.060 cpu_instruction_set: native 00:02:04.060 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:04.060 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:04.060 enable_docs : false 00:02:04.060 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:04.060 enable_kmods : false 00:02:04.060 max_lcores : 128 00:02:04.060 tests : false 00:02:04.060 00:02:04.060 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.319 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:04.584 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.584 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.584 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.584 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.584 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.584 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.584 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.584 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.584 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.584 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.584 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.584 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.584 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.584 [14/268] Linking static target lib/librte_kvargs.a 00:02:04.584 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.584 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.584 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.584 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.584 [19/268] Linking static target lib/librte_log.a 00:02:05.153 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.153 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.153 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.153 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.153 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.153 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.153 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.153 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.153 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.153 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.153 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.153 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.153 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.153 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.153 [34/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.153 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.153 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.153 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.153 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.153 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.153 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.153 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.153 [42/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:05.153 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.153 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.153 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.153 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.153 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.153 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.153 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.153 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.154 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.154 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.154 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.154 [54/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:05.154 [55/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.154 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.154 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.154 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.154 [59/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:05.154 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.154 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.154 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.154 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.154 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.154 [65/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:05.154 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.154 [67/268] Linking static target lib/librte_telemetry.a 00:02:05.154 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.154 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.154 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:05.154 [71/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:05.154 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:05.154 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:05.154 [74/268] Linking 
static target lib/librte_pci.a 00:02:05.154 [75/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:05.154 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:05.154 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:05.154 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:05.154 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:05.154 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:05.154 [81/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:05.154 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:05.154 [83/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.154 [84/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:05.154 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:05.154 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:05.154 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:05.154 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.154 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:05.154 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:05.154 [91/268] Linking static target lib/librte_ring.a 00:02:05.154 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.154 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.154 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.154 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:05.154 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.154 [97/268] Linking static target lib/librte_meter.a 00:02:05.154 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:05.154 [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:05.154 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:05.154 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:05.154 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:05.154 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:05.154 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.154 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.154 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:05.154 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.154 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.154 [109/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.154 [110/268] Linking target lib/librte_log.so.24.1 00:02:05.154 [111/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:05.154 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.154 [113/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:05.154 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:05.154 [115/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.154 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.154 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.154 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.154 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:05.154 [120/268] Linking static target lib/librte_net.a 00:02:05.154 [121/268] Linking static target lib/librte_eal.a 00:02:05.154 [122/268] Linking static target lib/librte_rcu.a 00:02:05.411 [123/268] Linking static target lib/librte_mempool.a 00:02:05.411 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:05.411 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:05.411 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:05.411 [127/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.411 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:05.411 [129/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:05.411 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:05.411 [131/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.411 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:05.411 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.411 [134/268] Linking static target lib/librte_mbuf.a 00:02:05.411 [135/268] Linking target lib/librte_kvargs.so.24.1 00:02:05.411 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.411 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.411 [138/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:05.411 [139/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.411 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.411 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:05.411 [142/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.411 [143/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:05.411 [144/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.669 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.669 [146/268] Linking static target lib/librte_timer.a 00:02:05.669 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.669 [148/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.669 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:05.669 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.669 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.669 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.669 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.669 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.669 [155/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:05.669 [156/268] Linking target lib/librte_telemetry.so.24.1 00:02:05.669 [157/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.669 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.669 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:05.669 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.669 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.669 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:05.669 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:05.669 [164/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.669 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.669 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:05.669 [167/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.669 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:05.669 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:05.669 [170/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.669 [171/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:05.669 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:05.669 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.669 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.669 [175/268] Linking static target lib/librte_cmdline.a 00:02:05.669 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:05.669 [177/268] Linking static target lib/librte_hash.a 00:02:05.669 [178/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:05.669 [179/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:05.669 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:05.669 [181/268] Linking static target lib/librte_dmadev.a 00:02:05.669 [182/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.669 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.669 [184/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:05.669 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.927 [186/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:05.927 [187/268] Linking static target lib/librte_compressdev.a 00:02:05.927 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.927 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:05.927 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:05.927 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.927 [192/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:05.927 [193/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.927 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:05.927 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.927 [196/268] Linking static target lib/librte_reorder.a 00:02:05.927 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:05.927 [198/268] Linking static target lib/librte_power.a 00:02:05.927 [199/268] Linking static target drivers/librte_bus_vdev.a 00:02:05.927 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.927 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.927 [202/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.927 [203/268] Linking static target lib/librte_security.a 00:02:05.927 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.927 [205/268] Linking static target drivers/librte_mempool_ring.a 00:02:05.927 [206/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.927 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.927 [208/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.927 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.927 [210/268] Linking static target lib/librte_ethdev.a 00:02:05.927 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.927 [212/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.927 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.927 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.927 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:06.185 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:06.185 [217/268] Linking static target lib/librte_cryptodev.a 00:02:06.185 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.185 [219/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:06.185 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.443 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.443 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.443 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.702 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.702 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.702 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:06.702 [227/268] Linking static target lib/librte_vhost.a 00:02:06.702 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.961 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.338 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.597 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.164 [232/268] Generating lib/ethdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:15.422 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.422 [234/268] Linking target lib/librte_eal.so.24.1 00:02:15.680 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:15.680 [236/268] Linking target lib/librte_timer.so.24.1 00:02:15.680 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:15.680 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:15.680 [239/268] Linking target lib/librte_ring.so.24.1 00:02:15.680 [240/268] Linking target lib/librte_pci.so.24.1 00:02:15.680 [241/268] Linking target lib/librte_meter.so.24.1 00:02:15.680 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:15.680 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:15.680 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:15.680 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:15.680 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:15.938 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:15.938 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:15.938 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:15.938 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:15.938 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:15.938 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:15.938 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:16.196 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:16.196 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:16.196 [256/268] Linking target lib/librte_net.so.24.1 00:02:16.196 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:16.196 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:16.454 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:16.454 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:16.454 [261/268] Linking target lib/librte_hash.so.24.1 00:02:16.454 [262/268] Linking target lib/librte_security.so.24.1 00:02:16.454 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:16.454 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:16.454 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:16.454 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:16.712 [267/268] Linking target lib/librte_power.so.24.1 00:02:16.712 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:16.712 INFO: autodetecting backend as ninja 00:02:16.712 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 88 00:02:17.648 CC lib/log/log.o 00:02:17.648 CC lib/log/log_flags.o 00:02:17.648 CC lib/log/log_deprecated.o 00:02:17.648 CC lib/ut_mock/mock.o 00:02:17.648 CC lib/ut/ut.o 00:02:17.648 LIB libspdk_log.a 00:02:17.648 LIB libspdk_ut_mock.a 00:02:17.648 LIB libspdk_ut.a 00:02:17.905 CC lib/ioat/ioat.o 00:02:17.905 CC lib/util/base64.o 00:02:17.905 CC lib/util/bit_array.o 00:02:17.905 CC lib/dma/dma.o 00:02:17.905 CC lib/util/cpuset.o 00:02:17.905 CC 
lib/util/crc16.o 00:02:17.905 CC lib/util/crc32.o 00:02:17.905 CC lib/util/crc32c.o 00:02:17.905 CC lib/util/crc32_ieee.o 00:02:17.905 CC lib/util/crc64.o 00:02:17.905 CC lib/util/dif.o 00:02:17.905 CC lib/util/fd.o 00:02:17.905 CC lib/util/fd_group.o 00:02:17.905 CXX lib/trace_parser/trace.o 00:02:17.905 CC lib/util/file.o 00:02:17.905 CC lib/util/hexlify.o 00:02:17.905 CC lib/util/iov.o 00:02:17.905 CC lib/util/math.o 00:02:17.905 CC lib/util/net.o 00:02:17.905 CC lib/util/pipe.o 00:02:17.905 CC lib/util/strerror_tls.o 00:02:17.905 CC lib/util/string.o 00:02:17.905 CC lib/util/uuid.o 00:02:17.905 CC lib/util/xor.o 00:02:17.905 CC lib/util/zipf.o 00:02:17.905 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.905 CC lib/vfio_user/host/vfio_user.o 00:02:18.163 LIB libspdk_dma.a 00:02:18.163 LIB libspdk_ioat.a 00:02:18.163 LIB libspdk_vfio_user.a 00:02:18.163 LIB libspdk_util.a 00:02:18.421 LIB libspdk_trace_parser.a 00:02:18.421 CC lib/vmd/vmd.o 00:02:18.421 CC lib/vmd/led.o 00:02:18.421 CC lib/json/json_util.o 00:02:18.421 CC lib/json/json_parse.o 00:02:18.421 CC lib/json/json_write.o 00:02:18.421 CC lib/env_dpdk/env.o 00:02:18.421 CC lib/rdma_provider/common.o 00:02:18.421 CC lib/env_dpdk/memory.o 00:02:18.421 CC lib/env_dpdk/pci.o 00:02:18.421 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:18.421 CC lib/env_dpdk/init.o 00:02:18.421 CC lib/env_dpdk/threads.o 00:02:18.421 CC lib/env_dpdk/pci_ioat.o 00:02:18.421 CC lib/env_dpdk/pci_virtio.o 00:02:18.421 CC lib/env_dpdk/pci_vmd.o 00:02:18.421 CC lib/idxd/idxd.o 00:02:18.421 CC lib/env_dpdk/pci_idxd.o 00:02:18.421 CC lib/conf/conf.o 00:02:18.421 CC lib/rdma_utils/rdma_utils.o 00:02:18.421 CC lib/idxd/idxd_user.o 00:02:18.421 CC lib/env_dpdk/pci_event.o 00:02:18.421 CC lib/idxd/idxd_kernel.o 00:02:18.421 CC lib/env_dpdk/sigbus_handler.o 00:02:18.421 CC lib/env_dpdk/pci_dpdk.o 00:02:18.421 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.421 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.680 LIB libspdk_rdma_provider.a 00:02:18.680 LIB libspdk_conf.a 00:02:18.680 LIB libspdk_rdma_utils.a 00:02:18.680 LIB libspdk_json.a 00:02:18.938 LIB libspdk_idxd.a 00:02:18.938 LIB libspdk_vmd.a 00:02:18.938 CC lib/jsonrpc/jsonrpc_server.o 00:02:18.938 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:18.938 CC lib/jsonrpc/jsonrpc_client.o 00:02:18.938 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.196 LIB libspdk_jsonrpc.a 00:02:19.455 CC lib/rpc/rpc.o 00:02:19.455 LIB libspdk_env_dpdk.a 00:02:19.455 LIB libspdk_rpc.a 00:02:20.022 CC lib/trace/trace.o 00:02:20.022 CC lib/trace/trace_flags.o 00:02:20.022 CC lib/notify/notify.o 00:02:20.022 CC lib/trace/trace_rpc.o 00:02:20.022 CC lib/notify/notify_rpc.o 00:02:20.022 CC lib/keyring/keyring.o 00:02:20.022 CC lib/keyring/keyring_rpc.o 00:02:20.022 LIB libspdk_notify.a 00:02:20.022 LIB libspdk_trace.a 00:02:20.022 LIB libspdk_keyring.a 00:02:20.279 CC lib/thread/thread.o 00:02:20.279 CC lib/thread/iobuf.o 00:02:20.279 CC lib/sock/sock.o 00:02:20.279 CC lib/sock/sock_rpc.o 00:02:20.537 LIB libspdk_sock.a 00:02:20.795 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:20.795 CC lib/nvme/nvme_ctrlr.o 00:02:20.795 CC lib/nvme/nvme_fabric.o 00:02:20.795 CC lib/nvme/nvme_ns_cmd.o 00:02:20.795 CC lib/nvme/nvme_ns.o 00:02:20.795 CC lib/nvme/nvme_pcie_common.o 00:02:20.795 CC lib/nvme/nvme_pcie.o 00:02:20.795 CC lib/nvme/nvme_qpair.o 00:02:20.795 CC lib/nvme/nvme.o 00:02:20.795 CC lib/nvme/nvme_quirks.o 00:02:20.795 CC lib/nvme/nvme_transport.o 00:02:20.795 CC lib/nvme/nvme_discovery.o 00:02:20.795 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:20.795 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:02:20.795 CC lib/nvme/nvme_tcp.o 00:02:20.795 CC lib/nvme/nvme_opal.o 00:02:20.795 CC lib/nvme/nvme_io_msg.o 00:02:20.795 CC lib/nvme/nvme_poll_group.o 00:02:20.795 CC lib/nvme/nvme_zns.o 00:02:20.795 CC lib/nvme/nvme_stubs.o 00:02:20.795 CC lib/nvme/nvme_auth.o 00:02:20.795 CC lib/nvme/nvme_cuse.o 00:02:20.795 CC lib/nvme/nvme_vfio_user.o 00:02:20.795 CC lib/nvme/nvme_rdma.o 00:02:21.053 LIB libspdk_thread.a 00:02:21.310 CC lib/init/json_config.o 00:02:21.310 CC lib/init/subsystem.o 00:02:21.310 CC lib/init/subsystem_rpc.o 00:02:21.310 CC lib/init/rpc.o 00:02:21.310 CC lib/accel/accel_rpc.o 00:02:21.310 CC lib/accel/accel.o 00:02:21.310 CC lib/vfu_tgt/tgt_endpoint.o 00:02:21.310 CC lib/accel/accel_sw.o 00:02:21.310 CC lib/blob/blobstore.o 00:02:21.311 CC lib/vfu_tgt/tgt_rpc.o 00:02:21.311 CC lib/virtio/virtio.o 00:02:21.311 CC lib/blob/request.o 00:02:21.311 CC lib/blob/zeroes.o 00:02:21.311 CC lib/virtio/virtio_vhost_user.o 00:02:21.311 CC lib/blob/blob_bs_dev.o 00:02:21.311 CC lib/virtio/virtio_vfio_user.o 00:02:21.311 CC lib/virtio/virtio_pci.o 00:02:21.568 LIB libspdk_init.a 00:02:21.568 LIB libspdk_virtio.a 00:02:21.568 LIB libspdk_vfu_tgt.a 00:02:21.827 CC lib/event/app.o 00:02:21.827 CC lib/event/reactor.o 00:02:21.827 CC lib/event/log_rpc.o 00:02:21.827 CC lib/event/app_rpc.o 00:02:21.827 CC lib/event/scheduler_static.o 00:02:22.084 LIB libspdk_event.a 00:02:22.084 LIB libspdk_accel.a 00:02:22.341 LIB libspdk_nvme.a 00:02:22.341 CC lib/bdev/bdev.o 00:02:22.341 CC lib/bdev/bdev_rpc.o 00:02:22.341 CC lib/bdev/bdev_zone.o 00:02:22.341 CC lib/bdev/part.o 00:02:22.341 CC lib/bdev/scsi_nvme.o 00:02:23.274 LIB libspdk_blob.a 00:02:23.532 CC lib/lvol/lvol.o 00:02:23.532 CC lib/blobfs/blobfs.o 00:02:23.532 CC lib/blobfs/tree.o 00:02:23.791 LIB libspdk_lvol.a 00:02:24.050 LIB libspdk_blobfs.a 00:02:24.050 LIB libspdk_bdev.a 00:02:24.307 CC lib/ftl/ftl_core.o 00:02:24.307 CC lib/ftl/ftl_init.o 00:02:24.307 CC lib/nvmf/ctrlr.o 00:02:24.307 CC lib/ftl/ftl_layout.o 00:02:24.307 CC lib/ftl/ftl_debug.o 00:02:24.307 CC lib/nvmf/ctrlr_discovery.o 00:02:24.307 CC lib/ftl/ftl_io.o 00:02:24.307 CC lib/ftl/ftl_sb.o 00:02:24.307 CC lib/nvmf/ctrlr_bdev.o 00:02:24.307 CC lib/ftl/ftl_l2p.o 00:02:24.307 CC lib/nvmf/subsystem.o 00:02:24.307 CC lib/nvmf/nvmf.o 00:02:24.307 CC lib/ftl/ftl_l2p_flat.o 00:02:24.307 CC lib/nvmf/nvmf_rpc.o 00:02:24.307 CC lib/ftl/ftl_nv_cache.o 00:02:24.307 CC lib/nvmf/transport.o 00:02:24.307 CC lib/ftl/ftl_band.o 00:02:24.307 CC lib/nvmf/tcp.o 00:02:24.307 CC lib/ftl/ftl_band_ops.o 00:02:24.307 CC lib/scsi/dev.o 00:02:24.307 CC lib/nvmf/stubs.o 00:02:24.307 CC lib/ftl/ftl_writer.o 00:02:24.307 CC lib/nvmf/mdns_server.o 00:02:24.307 CC lib/ftl/ftl_rq.o 00:02:24.307 CC lib/scsi/lun.o 00:02:24.307 CC lib/nbd/nbd.o 00:02:24.307 CC lib/scsi/port.o 00:02:24.307 CC lib/nvmf/vfio_user.o 00:02:24.307 CC lib/ftl/ftl_reloc.o 00:02:24.307 CC lib/nbd/nbd_rpc.o 00:02:24.307 CC lib/nvmf/rdma.o 00:02:24.307 CC lib/scsi/scsi.o 00:02:24.307 CC lib/ftl/ftl_l2p_cache.o 00:02:24.307 CC lib/nvmf/auth.o 00:02:24.307 CC lib/scsi/scsi_bdev.o 00:02:24.307 CC lib/ftl/ftl_p2l.o 00:02:24.307 CC lib/ublk/ublk.o 00:02:24.307 CC lib/ftl/mngt/ftl_mngt.o 00:02:24.307 CC lib/scsi/scsi_pr.o 00:02:24.307 CC lib/scsi/scsi_rpc.o 00:02:24.308 CC lib/ublk/ublk_rpc.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:24.308 CC lib/scsi/task.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_md.o 
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:24.308 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:24.308 CC lib/ftl/utils/ftl_conf.o 00:02:24.308 CC lib/ftl/utils/ftl_mempool.o 00:02:24.308 CC lib/ftl/utils/ftl_md.o 00:02:24.308 CC lib/ftl/utils/ftl_bitmap.o 00:02:24.308 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:24.308 CC lib/ftl/utils/ftl_property.o 00:02:24.308 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:24.308 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:24.308 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:24.308 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:24.308 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:24.308 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:24.308 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:24.308 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:24.308 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:24.308 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:24.308 CC lib/ftl/base/ftl_base_dev.o 00:02:24.308 CC lib/ftl/base/ftl_base_bdev.o 00:02:24.308 CC lib/ftl/ftl_trace.o 00:02:24.873 LIB libspdk_scsi.a 00:02:24.873 LIB libspdk_nbd.a 00:02:24.873 LIB libspdk_ublk.a 00:02:25.132 CC lib/iscsi/conn.o 00:02:25.132 CC lib/iscsi/init_grp.o 00:02:25.132 CC lib/iscsi/iscsi.o 00:02:25.132 CC lib/iscsi/md5.o 00:02:25.132 CC lib/iscsi/param.o 00:02:25.132 CC lib/vhost/vhost.o 00:02:25.132 CC lib/iscsi/portal_grp.o 00:02:25.132 CC lib/iscsi/tgt_node.o 00:02:25.132 CC lib/vhost/vhost_rpc.o 00:02:25.132 CC lib/iscsi/iscsi_subsystem.o 00:02:25.132 CC lib/vhost/vhost_scsi.o 00:02:25.132 CC lib/vhost/vhost_blk.o 00:02:25.132 CC lib/iscsi/iscsi_rpc.o 00:02:25.132 CC lib/iscsi/task.o 00:02:25.132 CC lib/vhost/rte_vhost_user.o 00:02:25.132 LIB libspdk_ftl.a 00:02:25.698 LIB libspdk_nvmf.a 00:02:25.698 LIB libspdk_vhost.a 00:02:25.956 LIB libspdk_iscsi.a 00:02:26.214 CC module/vfu_device/vfu_virtio.o 00:02:26.214 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.214 CC module/vfu_device/vfu_virtio_rpc.o 00:02:26.214 CC module/vfu_device/vfu_virtio_blk.o 00:02:26.214 CC module/vfu_device/vfu_virtio_scsi.o 00:02:26.477 CC module/accel/dsa/accel_dsa.o 00:02:26.477 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.477 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.477 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.477 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.477 CC module/sock/posix/posix.o 00:02:26.477 CC module/accel/error/accel_error.o 00:02:26.477 CC module/accel/error/accel_error_rpc.o 00:02:26.477 CC module/accel/ioat/accel_ioat.o 00:02:26.477 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.477 CC module/blob/bdev/blob_bdev.o 00:02:26.477 CC module/keyring/file/keyring.o 00:02:26.477 CC module/keyring/linux/keyring.o 00:02:26.477 LIB libspdk_env_dpdk_rpc.a 00:02:26.477 CC module/keyring/file/keyring_rpc.o 00:02:26.477 CC module/keyring/linux/keyring_rpc.o 00:02:26.477 CC module/accel/iaa/accel_iaa.o 00:02:26.477 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.477 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.477 LIB libspdk_keyring_linux.a 00:02:26.477 LIB libspdk_scheduler_gscheduler.a 00:02:26.477 LIB libspdk_keyring_file.a 00:02:26.477 LIB libspdk_scheduler_dynamic.a 00:02:26.477 LIB libspdk_accel_error.a 00:02:26.477 LIB libspdk_accel_ioat.a 00:02:26.477 LIB libspdk_accel_iaa.a 00:02:26.477 LIB libspdk_accel_dsa.a 00:02:26.477 LIB 
libspdk_blob_bdev.a 00:02:26.736 LIB libspdk_vfu_device.a 00:02:26.736 LIB libspdk_sock_posix.a 00:02:26.993 CC module/bdev/malloc/bdev_malloc.o 00:02:26.993 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:26.993 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:26.993 CC module/bdev/nvme/nvme_rpc.o 00:02:26.993 CC module/bdev/nvme/bdev_nvme.o 00:02:26.993 CC module/bdev/nvme/bdev_mdns_client.o 00:02:26.993 CC module/bdev/nvme/vbdev_opal.o 00:02:26.993 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:26.993 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:26.993 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:26.993 CC module/bdev/lvol/vbdev_lvol.o 00:02:26.993 CC module/bdev/error/vbdev_error.o 00:02:26.993 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:26.993 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:26.993 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:26.993 CC module/bdev/error/vbdev_error_rpc.o 00:02:26.993 CC module/bdev/delay/vbdev_delay.o 00:02:26.993 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:26.993 CC module/blobfs/bdev/blobfs_bdev.o 00:02:26.993 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:26.993 CC module/bdev/raid/bdev_raid.o 00:02:26.993 CC module/bdev/raid/bdev_raid_rpc.o 00:02:26.993 CC module/bdev/aio/bdev_aio.o 00:02:26.993 CC module/bdev/raid/raid0.o 00:02:26.993 CC module/bdev/raid/bdev_raid_sb.o 00:02:26.993 CC module/bdev/aio/bdev_aio_rpc.o 00:02:26.993 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:26.993 CC module/bdev/null/bdev_null.o 00:02:26.993 CC module/bdev/null/bdev_null_rpc.o 00:02:26.993 CC module/bdev/raid/raid1.o 00:02:26.993 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:26.993 CC module/bdev/raid/concat.o 00:02:26.993 CC module/bdev/split/vbdev_split_rpc.o 00:02:26.994 CC module/bdev/split/vbdev_split.o 00:02:26.994 CC module/bdev/iscsi/bdev_iscsi.o 00:02:26.994 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:26.994 CC module/bdev/gpt/vbdev_gpt.o 00:02:26.994 CC module/bdev/gpt/gpt.o 00:02:26.994 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:26.994 CC module/bdev/ftl/bdev_ftl.o 00:02:26.994 CC module/bdev/passthru/vbdev_passthru.o 00:02:26.994 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:26.994 LIB libspdk_blobfs_bdev.a 00:02:27.251 LIB libspdk_bdev_split.a 00:02:27.251 LIB libspdk_bdev_error.a 00:02:27.251 LIB libspdk_bdev_null.a 00:02:27.251 LIB libspdk_bdev_gpt.a 00:02:27.251 LIB libspdk_bdev_ftl.a 00:02:27.251 LIB libspdk_bdev_aio.a 00:02:27.251 LIB libspdk_bdev_passthru.a 00:02:27.251 LIB libspdk_bdev_zone_block.a 00:02:27.251 LIB libspdk_bdev_iscsi.a 00:02:27.251 LIB libspdk_bdev_malloc.a 00:02:27.251 LIB libspdk_bdev_delay.a 00:02:27.251 LIB libspdk_bdev_lvol.a 00:02:27.251 LIB libspdk_bdev_virtio.a 00:02:27.508 LIB libspdk_bdev_raid.a 00:02:28.443 LIB libspdk_bdev_nvme.a 00:02:28.702 CC module/event/subsystems/scheduler/scheduler.o 00:02:28.702 CC module/event/subsystems/keyring/keyring.o 00:02:28.702 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:28.702 CC module/event/subsystems/sock/sock.o 00:02:28.702 CC module/event/subsystems/iobuf/iobuf.o 00:02:28.702 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:28.702 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:28.702 CC module/event/subsystems/vmd/vmd.o 00:02:28.702 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:28.960 LIB libspdk_event_vhost_blk.a 00:02:28.960 LIB libspdk_event_vfu_tgt.a 00:02:28.960 LIB libspdk_event_keyring.a 00:02:28.960 LIB libspdk_event_scheduler.a 00:02:28.960 LIB libspdk_event_sock.a 00:02:28.960 LIB libspdk_event_vmd.a 00:02:28.960 LIB libspdk_event_iobuf.a 
00:02:29.218 CC module/event/subsystems/accel/accel.o 00:02:29.218 LIB libspdk_event_accel.a 00:02:29.478 CC module/event/subsystems/bdev/bdev.o 00:02:29.737 LIB libspdk_event_bdev.a 00:02:29.995 CC module/event/subsystems/nbd/nbd.o 00:02:29.995 CC module/event/subsystems/scsi/scsi.o 00:02:29.995 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:29.995 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:29.995 CC module/event/subsystems/ublk/ublk.o 00:02:29.995 LIB libspdk_event_nbd.a 00:02:29.995 LIB libspdk_event_ublk.a 00:02:30.254 LIB libspdk_event_scsi.a 00:02:30.254 LIB libspdk_event_nvmf.a 00:02:30.512 CC module/event/subsystems/iscsi/iscsi.o 00:02:30.513 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:30.513 LIB libspdk_event_vhost_scsi.a 00:02:30.513 LIB libspdk_event_iscsi.a 00:02:30.772 CXX app/trace/trace.o 00:02:30.772 TEST_HEADER include/spdk/accel.h 00:02:30.772 TEST_HEADER include/spdk/accel_module.h 00:02:30.772 TEST_HEADER include/spdk/assert.h 00:02:30.772 TEST_HEADER include/spdk/bdev_module.h 00:02:30.772 TEST_HEADER include/spdk/barrier.h 00:02:30.772 TEST_HEADER include/spdk/bdev_zone.h 00:02:30.772 TEST_HEADER include/spdk/base64.h 00:02:30.772 TEST_HEADER include/spdk/bdev.h 00:02:30.772 TEST_HEADER include/spdk/bit_pool.h 00:02:30.772 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:30.772 TEST_HEADER include/spdk/blob_bdev.h 00:02:30.772 TEST_HEADER include/spdk/bit_array.h 00:02:30.772 TEST_HEADER include/spdk/blob.h 00:02:30.772 TEST_HEADER include/spdk/blobfs.h 00:02:30.772 CC app/spdk_top/spdk_top.o 00:02:30.772 TEST_HEADER include/spdk/conf.h 00:02:30.772 TEST_HEADER include/spdk/cpuset.h 00:02:30.772 TEST_HEADER include/spdk/config.h 00:02:30.772 TEST_HEADER include/spdk/crc32.h 00:02:30.772 TEST_HEADER include/spdk/crc64.h 00:02:30.772 TEST_HEADER include/spdk/crc16.h 00:02:30.772 TEST_HEADER include/spdk/dma.h 00:02:30.772 TEST_HEADER include/spdk/dif.h 00:02:30.772 TEST_HEADER include/spdk/endian.h 00:02:30.772 TEST_HEADER include/spdk/env_dpdk.h 00:02:30.772 TEST_HEADER include/spdk/env.h 00:02:30.772 CC app/trace_record/trace_record.o 00:02:30.772 TEST_HEADER include/spdk/event.h 00:02:30.772 CC app/spdk_lspci/spdk_lspci.o 00:02:30.772 TEST_HEADER include/spdk/fd_group.h 00:02:30.772 TEST_HEADER include/spdk/ftl.h 00:02:30.772 TEST_HEADER include/spdk/fd.h 00:02:30.772 TEST_HEADER include/spdk/file.h 00:02:30.772 CC app/spdk_nvme_perf/perf.o 00:02:30.772 TEST_HEADER include/spdk/hexlify.h 00:02:30.772 TEST_HEADER include/spdk/histogram_data.h 00:02:30.772 CC test/rpc_client/rpc_client_test.o 00:02:30.772 TEST_HEADER include/spdk/gpt_spec.h 00:02:30.772 TEST_HEADER include/spdk/idxd.h 00:02:30.772 TEST_HEADER include/spdk/ioat.h 00:02:30.772 TEST_HEADER include/spdk/init.h 00:02:30.772 TEST_HEADER include/spdk/idxd_spec.h 00:02:30.772 TEST_HEADER include/spdk/ioat_spec.h 00:02:30.772 TEST_HEADER include/spdk/iscsi_spec.h 00:02:30.772 TEST_HEADER include/spdk/jsonrpc.h 00:02:30.772 TEST_HEADER include/spdk/json.h 00:02:30.772 CC app/spdk_nvme_identify/identify.o 00:02:30.772 TEST_HEADER include/spdk/keyring.h 00:02:30.772 TEST_HEADER include/spdk/keyring_module.h 00:02:30.772 TEST_HEADER include/spdk/likely.h 00:02:30.772 TEST_HEADER include/spdk/log.h 00:02:30.772 TEST_HEADER include/spdk/memory.h 00:02:30.772 TEST_HEADER include/spdk/lvol.h 00:02:30.772 TEST_HEADER include/spdk/mmio.h 00:02:30.772 TEST_HEADER include/spdk/net.h 00:02:30.772 TEST_HEADER include/spdk/nbd.h 00:02:30.772 CC app/spdk_nvme_discover/discovery_aer.o 00:02:30.772 TEST_HEADER 
include/spdk/notify.h 00:02:30.772 TEST_HEADER include/spdk/nvme.h 00:02:30.772 TEST_HEADER include/spdk/nvme_intel.h 00:02:30.772 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:30.772 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:30.772 TEST_HEADER include/spdk/nvme_zns.h 00:02:30.772 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:30.772 TEST_HEADER include/spdk/nvme_spec.h 00:02:30.772 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:30.772 TEST_HEADER include/spdk/nvmf_spec.h 00:02:30.772 TEST_HEADER include/spdk/nvmf.h 00:02:30.772 TEST_HEADER include/spdk/opal.h 00:02:30.772 TEST_HEADER include/spdk/nvmf_transport.h 00:02:30.772 TEST_HEADER include/spdk/opal_spec.h 00:02:30.772 TEST_HEADER include/spdk/queue.h 00:02:30.772 TEST_HEADER include/spdk/pci_ids.h 00:02:30.772 TEST_HEADER include/spdk/pipe.h 00:02:30.772 TEST_HEADER include/spdk/reduce.h 00:02:30.772 TEST_HEADER include/spdk/scheduler.h 00:02:30.772 TEST_HEADER include/spdk/rpc.h 00:02:30.772 TEST_HEADER include/spdk/scsi_spec.h 00:02:30.772 TEST_HEADER include/spdk/scsi.h 00:02:30.772 TEST_HEADER include/spdk/sock.h 00:02:30.772 TEST_HEADER include/spdk/thread.h 00:02:30.772 TEST_HEADER include/spdk/stdinc.h 00:02:30.772 TEST_HEADER include/spdk/trace.h 00:02:30.772 TEST_HEADER include/spdk/string.h 00:02:30.772 TEST_HEADER include/spdk/tree.h 00:02:30.772 TEST_HEADER include/spdk/ublk.h 00:02:30.772 TEST_HEADER include/spdk/trace_parser.h 00:02:30.772 TEST_HEADER include/spdk/uuid.h 00:02:30.772 TEST_HEADER include/spdk/version.h 00:02:30.772 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:30.772 TEST_HEADER include/spdk/util.h 00:02:30.772 TEST_HEADER include/spdk/vmd.h 00:02:30.772 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:30.772 TEST_HEADER include/spdk/vhost.h 00:02:30.772 CXX test/cpp_headers/accel.o 00:02:30.772 CXX test/cpp_headers/accel_module.o 00:02:30.772 TEST_HEADER include/spdk/xor.h 00:02:30.772 CXX test/cpp_headers/assert.o 00:02:30.772 TEST_HEADER include/spdk/zipf.h 00:02:30.772 CXX test/cpp_headers/barrier.o 00:02:30.772 CXX test/cpp_headers/base64.o 00:02:30.772 CXX test/cpp_headers/bdev_module.o 00:02:30.772 CXX test/cpp_headers/bdev_zone.o 00:02:30.772 CXX test/cpp_headers/bdev.o 00:02:30.772 CXX test/cpp_headers/bit_pool.o 00:02:30.772 CXX test/cpp_headers/bit_array.o 00:02:30.772 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.772 CXX test/cpp_headers/blobfs.o 00:02:30.772 CXX test/cpp_headers/blob_bdev.o 00:02:30.772 CXX test/cpp_headers/conf.o 00:02:30.772 CXX test/cpp_headers/cpuset.o 00:02:30.773 CXX test/cpp_headers/config.o 00:02:30.773 CXX test/cpp_headers/blob.o 00:02:30.773 CXX test/cpp_headers/crc64.o 00:02:30.773 CXX test/cpp_headers/crc32.o 00:02:30.773 CXX test/cpp_headers/crc16.o 00:02:30.773 CXX test/cpp_headers/dif.o 00:02:30.773 CC app/nvmf_tgt/nvmf_main.o 00:02:30.773 CXX test/cpp_headers/dma.o 00:02:30.773 CC app/iscsi_tgt/iscsi_tgt.o 00:02:30.773 CXX test/cpp_headers/env_dpdk.o 00:02:30.773 CXX test/cpp_headers/endian.o 00:02:30.773 CXX test/cpp_headers/event.o 00:02:30.773 CXX test/cpp_headers/env.o 00:02:30.773 CXX test/cpp_headers/fd_group.o 00:02:30.773 CC app/spdk_dd/spdk_dd.o 00:02:30.773 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:30.773 CXX test/cpp_headers/fd.o 00:02:30.773 CXX test/cpp_headers/file.o 00:02:30.773 CXX test/cpp_headers/ftl.o 00:02:30.773 CXX test/cpp_headers/gpt_spec.o 00:02:30.773 CXX test/cpp_headers/hexlify.o 00:02:30.773 CXX test/cpp_headers/idxd.o 00:02:30.773 CXX test/cpp_headers/histogram_data.o 00:02:30.773 CXX test/cpp_headers/idxd_spec.o 
00:02:30.773 CXX test/cpp_headers/init.o 00:02:30.773 CXX test/cpp_headers/ioat.o 00:02:30.773 CXX test/cpp_headers/ioat_spec.o 00:02:30.773 CXX test/cpp_headers/iscsi_spec.o 00:02:30.773 CXX test/cpp_headers/jsonrpc.o 00:02:30.773 CXX test/cpp_headers/json.o 00:02:30.773 CXX test/cpp_headers/keyring.o 00:02:30.773 CXX test/cpp_headers/keyring_module.o 00:02:30.773 CXX test/cpp_headers/likely.o 00:02:30.773 CXX test/cpp_headers/log.o 00:02:30.773 CXX test/cpp_headers/lvol.o 00:02:30.773 CXX test/cpp_headers/memory.o 00:02:30.773 CXX test/cpp_headers/mmio.o 00:02:30.773 CXX test/cpp_headers/nbd.o 00:02:30.773 CXX test/cpp_headers/net.o 00:02:30.773 CXX test/cpp_headers/notify.o 00:02:30.773 CXX test/cpp_headers/nvme.o 00:02:30.773 CXX test/cpp_headers/nvme_intel.o 00:02:30.773 CXX test/cpp_headers/nvme_ocssd.o 00:02:30.773 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:30.773 CXX test/cpp_headers/nvme_spec.o 00:02:30.773 CXX test/cpp_headers/nvme_zns.o 00:02:31.032 CC app/spdk_tgt/spdk_tgt.o 00:02:31.032 CC test/thread/poller_perf/poller_perf.o 00:02:31.032 CC test/env/memory/memory_ut.o 00:02:31.032 CXX test/cpp_headers/nvmf_cmd.o 00:02:31.032 CC test/thread/lock/spdk_lock.o 00:02:31.032 CC test/env/pci/pci_ut.o 00:02:31.032 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:31.032 CC test/app/histogram_perf/histogram_perf.o 00:02:31.032 CC examples/util/zipf/zipf.o 00:02:31.032 CC examples/ioat/perf/perf.o 00:02:31.032 CC test/app/stub/stub.o 00:02:31.032 CC test/env/vtophys/vtophys.o 00:02:31.032 CC test/app/jsoncat/jsoncat.o 00:02:31.032 CC app/fio/nvme/fio_plugin.o 00:02:31.032 CC examples/ioat/verify/verify.o 00:02:31.032 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:31.032 CC test/dma/test_dma/test_dma.o 00:02:31.032 LINK spdk_lspci 00:02:31.032 CC test/app/bdev_svc/bdev_svc.o 00:02:31.032 CC app/fio/bdev/fio_plugin.o 00:02:31.032 CC test/env/mem_callbacks/mem_callbacks.o 00:02:31.032 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:31.032 LINK rpc_client_test 00:02:31.032 CXX test/cpp_headers/nvmf.o 00:02:31.032 CXX test/cpp_headers/nvmf_spec.o 00:02:31.032 LINK spdk_nvme_discover 00:02:31.032 CXX test/cpp_headers/nvmf_transport.o 00:02:31.032 CXX test/cpp_headers/opal.o 00:02:31.032 CXX test/cpp_headers/opal_spec.o 00:02:31.032 CXX test/cpp_headers/pci_ids.o 00:02:31.032 CXX test/cpp_headers/pipe.o 00:02:31.032 CXX test/cpp_headers/queue.o 00:02:31.032 LINK spdk_trace_record 00:02:31.032 CXX test/cpp_headers/reduce.o 00:02:31.032 CXX test/cpp_headers/rpc.o 00:02:31.032 CXX test/cpp_headers/scheduler.o 00:02:31.032 CXX test/cpp_headers/scsi.o 00:02:31.032 CXX test/cpp_headers/scsi_spec.o 00:02:31.032 CXX test/cpp_headers/sock.o 00:02:31.032 CXX test/cpp_headers/stdinc.o 00:02:31.032 CXX test/cpp_headers/string.o 00:02:31.032 CXX test/cpp_headers/thread.o 00:02:31.032 CXX test/cpp_headers/trace.o 00:02:31.032 CXX test/cpp_headers/trace_parser.o 00:02:31.032 CXX test/cpp_headers/tree.o 00:02:31.032 CXX test/cpp_headers/ublk.o 00:02:31.032 CXX test/cpp_headers/util.o 00:02:31.032 CXX test/cpp_headers/uuid.o 00:02:31.032 CXX test/cpp_headers/version.o 00:02:31.032 CXX test/cpp_headers/vfio_user_pci.o 00:02:31.032 CXX test/cpp_headers/vfio_user_spec.o 00:02:31.032 CXX test/cpp_headers/vhost.o 00:02:31.032 CXX test/cpp_headers/vmd.o 00:02:31.032 CXX test/cpp_headers/xor.o 00:02:31.032 CXX test/cpp_headers/zipf.o 00:02:31.032 LINK poller_perf 00:02:31.032 LINK histogram_perf 00:02:31.032 LINK vtophys 00:02:31.032 LINK nvmf_tgt 00:02:31.032 LINK jsoncat 00:02:31.032 LINK zipf 00:02:31.032 LINK 
interrupt_tgt 00:02:31.032 LINK env_dpdk_post_init 00:02:31.290 LINK iscsi_tgt 00:02:31.290 LINK stub 00:02:31.290 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:31.290 LINK spdk_tgt 00:02:31.290 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:31.290 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:31.290 LINK verify 00:02:31.290 LINK ioat_perf 00:02:31.290 LINK bdev_svc 00:02:31.290 LINK spdk_trace 00:02:31.290 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:31.290 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:31.290 LINK test_dma 00:02:31.290 LINK spdk_dd 00:02:31.548 LINK pci_ut 00:02:31.548 LINK nvme_fuzz 00:02:31.548 LINK llvm_vfio_fuzz 00:02:31.548 LINK spdk_nvme 00:02:31.548 LINK spdk_nvme_identify 00:02:31.548 LINK spdk_nvme_perf 00:02:31.548 LINK spdk_bdev 00:02:31.548 LINK mem_callbacks 00:02:31.548 LINK vhost_fuzz 00:02:31.806 CC examples/vmd/lsvmd/lsvmd.o 00:02:31.806 CC examples/vmd/led/led.o 00:02:31.806 LINK spdk_top 00:02:31.806 CC examples/sock/hello_world/hello_sock.o 00:02:31.806 CC app/vhost/vhost.o 00:02:31.806 CC examples/idxd/perf/perf.o 00:02:31.806 CC examples/thread/thread/thread_ex.o 00:02:31.806 LINK lsvmd 00:02:31.806 LINK llvm_nvme_fuzz 00:02:31.806 LINK led 00:02:31.806 LINK memory_ut 00:02:31.806 LINK vhost 00:02:31.806 LINK hello_sock 00:02:32.064 LINK thread 00:02:32.064 LINK idxd_perf 00:02:32.064 LINK spdk_lock 00:02:32.321 LINK iscsi_fuzz 00:02:32.579 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.579 CC examples/nvme/reconnect/reconnect.o 00:02:32.579 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:32.579 CC examples/nvme/abort/abort.o 00:02:32.580 CC examples/nvme/hotplug/hotplug.o 00:02:32.580 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.580 CC examples/nvme/hello_world/hello_world.o 00:02:32.580 CC examples/nvme/arbitration/arbitration.o 00:02:32.837 LINK pmr_persistence 00:02:32.837 LINK cmb_copy 00:02:32.837 LINK hello_world 00:02:32.837 LINK hotplug 00:02:32.837 CC test/event/event_perf/event_perf.o 00:02:32.837 CC test/event/reactor_perf/reactor_perf.o 00:02:32.837 CC test/event/reactor/reactor.o 00:02:32.837 CC test/event/app_repeat/app_repeat.o 00:02:32.837 CC test/event/scheduler/scheduler.o 00:02:32.837 LINK reconnect 00:02:32.837 LINK abort 00:02:32.837 LINK arbitration 00:02:32.837 LINK nvme_manage 00:02:32.837 LINK reactor_perf 00:02:32.837 LINK event_perf 00:02:32.837 LINK reactor 00:02:33.095 LINK app_repeat 00:02:33.095 LINK scheduler 00:02:33.095 CC test/nvme/sgl/sgl.o 00:02:33.095 CC test/nvme/aer/aer.o 00:02:33.095 CC test/nvme/reset/reset.o 00:02:33.095 CC test/nvme/overhead/overhead.o 00:02:33.095 CC test/nvme/simple_copy/simple_copy.o 00:02:33.095 CC test/nvme/cuse/cuse.o 00:02:33.095 CC test/nvme/err_injection/err_injection.o 00:02:33.095 CC test/nvme/connect_stress/connect_stress.o 00:02:33.095 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:33.095 CC test/nvme/e2edp/nvme_dp.o 00:02:33.095 CC test/nvme/fused_ordering/fused_ordering.o 00:02:33.095 CC test/nvme/startup/startup.o 00:02:33.095 CC test/nvme/reserve/reserve.o 00:02:33.095 CC test/nvme/fdp/fdp.o 00:02:33.095 CC test/nvme/boot_partition/boot_partition.o 00:02:33.095 CC test/nvme/compliance/nvme_compliance.o 00:02:33.095 CC test/accel/dif/dif.o 00:02:33.095 CC test/blobfs/mkfs/mkfs.o 00:02:33.095 CC test/lvol/esnap/esnap.o 00:02:33.353 LINK startup 00:02:33.353 LINK connect_stress 00:02:33.353 LINK err_injection 00:02:33.353 LINK boot_partition 00:02:33.353 LINK doorbell_aers 00:02:33.353 LINK fused_ordering 00:02:33.353 LINK reserve 
00:02:33.353 LINK simple_copy 00:02:33.353 LINK sgl 00:02:33.353 LINK reset 00:02:33.353 LINK aer 00:02:33.353 LINK nvme_dp 00:02:33.353 LINK overhead 00:02:33.353 LINK mkfs 00:02:33.353 LINK fdp 00:02:33.353 LINK nvme_compliance 00:02:33.353 LINK dif 00:02:33.612 CC examples/accel/perf/accel_perf.o 00:02:33.612 CC examples/blob/cli/blobcli.o 00:02:33.612 CC examples/blob/hello_world/hello_blob.o 00:02:33.869 LINK hello_blob 00:02:33.869 LINK accel_perf 00:02:33.869 LINK blobcli 00:02:34.136 LINK cuse 00:02:34.783 CC examples/bdev/hello_world/hello_bdev.o 00:02:34.783 CC examples/bdev/bdevperf/bdevperf.o 00:02:34.783 LINK hello_bdev 00:02:35.078 CC test/bdev/bdevio/bdevio.o 00:02:35.078 LINK bdevperf 00:02:35.346 LINK bdevio 00:02:36.762 CC examples/nvmf/nvmf/nvmf.o 00:02:36.762 LINK esnap 00:02:36.762 LINK nvmf 00:02:37.697 00:02:37.697 real 0m41.850s 00:02:37.697 user 6m5.252s 00:02:37.697 sys 2m3.416s 00:02:37.697 22:41:35 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:37.697 22:41:35 make -- common/autotest_common.sh@10 -- $ set +x 00:02:37.697 ************************************ 00:02:37.697 END TEST make 00:02:37.697 ************************************ 00:02:37.955 22:41:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:37.955 22:41:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:37.955 22:41:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:37.955 22:41:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.955 22:41:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:37.955 22:41:35 -- pm/common@44 -- $ pid=344603 00:02:37.955 22:41:35 -- pm/common@50 -- $ kill -TERM 344603 00:02:37.955 22:41:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.955 22:41:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:37.955 22:41:35 -- pm/common@44 -- $ pid=344605 00:02:37.955 22:41:35 -- pm/common@50 -- $ kill -TERM 344605 00:02:37.955 22:41:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.955 22:41:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:37.955 22:41:35 -- pm/common@44 -- $ pid=344606 00:02:37.955 22:41:35 -- pm/common@50 -- $ kill -TERM 344606 00:02:37.955 22:41:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.955 22:41:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:37.955 22:41:35 -- pm/common@44 -- $ pid=344629 00:02:37.955 22:41:35 -- pm/common@50 -- $ sudo -E kill -TERM 344629 00:02:37.955 22:41:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:37.955 22:41:36 -- nvmf/common.sh@7 -- # uname -s 00:02:37.955 22:41:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:37.955 22:41:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:37.955 22:41:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:37.955 22:41:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:37.955 22:41:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:37.955 22:41:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:37.955 22:41:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:37.955 22:41:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:37.955 22:41:36 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:37.955 22:41:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:37.955 22:41:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:02:37.956 22:41:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:02:37.956 22:41:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:37.956 22:41:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:37.956 22:41:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:37.956 22:41:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:37.956 22:41:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:37.956 22:41:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:37.956 22:41:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.956 22:41:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.956 22:41:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.956 22:41:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.956 22:41:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.956 22:41:36 -- paths/export.sh@5 -- # export PATH 00:02:37.956 22:41:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.956 22:41:36 -- nvmf/common.sh@47 -- # : 0 00:02:37.956 22:41:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:37.956 22:41:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:37.956 22:41:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:37.956 22:41:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:37.956 22:41:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:37.956 22:41:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:37.956 22:41:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:37.956 22:41:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:37.956 22:41:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:37.956 22:41:36 -- spdk/autotest.sh@32 -- # uname -s 00:02:37.956 22:41:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:37.956 22:41:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:37.956 22:41:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:37.956 22:41:36 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:37.956 22:41:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:37.956 22:41:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:37.956 22:41:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:37.956 22:41:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:37.956 22:41:36 -- spdk/autotest.sh@48 -- # udevadm_pid=403977 00:02:37.956 22:41:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:37.956 22:41:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:37.956 22:41:36 -- pm/common@17 -- # local monitor 00:02:37.956 22:41:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.956 22:41:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.956 22:41:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.956 22:41:36 -- pm/common@21 -- # date +%s 00:02:37.956 22:41:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.956 22:41:36 -- pm/common@21 -- # date +%s 00:02:37.956 22:41:36 -- pm/common@25 -- # sleep 1 00:02:37.956 22:41:36 -- pm/common@21 -- # date +%s 00:02:37.956 22:41:36 -- pm/common@21 -- # date +%s 00:02:37.956 22:41:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721853696 00:02:37.956 22:41:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721853696 00:02:37.956 22:41:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721853696 00:02:37.956 22:41:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721853696 00:02:37.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721853696_collect-vmstat.pm.log 00:02:37.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721853696_collect-cpu-load.pm.log 00:02:37.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721853696_collect-cpu-temp.pm.log 00:02:37.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721853696_collect-bmc-pm.bmc.pm.log 00:02:38.895 22:41:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:38.895 22:41:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:38.895 22:41:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:38.895 22:41:37 -- common/autotest_common.sh@10 -- # set +x 00:02:39.153 22:41:37 -- spdk/autotest.sh@59 -- # create_test_list 00:02:39.153 22:41:37 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:39.153 22:41:37 -- common/autotest_common.sh@10 -- # set +x 00:02:39.153 22:41:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:39.153 22:41:37 -- spdk/autotest.sh@61 -- 
# readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:39.153 22:41:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:39.153 22:41:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:39.153 22:41:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:39.153 22:41:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:39.153 22:41:37 -- common/autotest_common.sh@1455 -- # uname 00:02:39.153 22:41:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:39.153 22:41:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:39.153 22:41:37 -- common/autotest_common.sh@1475 -- # uname 00:02:39.153 22:41:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:39.153 22:41:37 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:39.153 22:41:37 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:39.153 22:41:37 -- spdk/autotest.sh@72 -- # hash lcov 00:02:39.153 22:41:37 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:39.153 22:41:37 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:39.153 22:41:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:39.153 22:41:37 -- common/autotest_common.sh@10 -- # set +x 00:02:39.153 22:41:37 -- spdk/autotest.sh@91 -- # rm -f 00:02:39.153 22:41:37 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.341 0000:dd:00.0 (8086 0a54): Already using the nvme driver 00:02:43.341 0000:df:00.0 (8086 0a54): Already using the nvme driver 00:02:43.341 0000:de:00.0 (8086 0953): Already using the nvme driver 00:02:43.342 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:43.342 0000:dc:00.0 (8086 0953): Already using the nvme driver 00:02:44.275 22:41:42 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:44.275 22:41:42 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:44.275 22:41:42 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:44.275 22:41:42 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:44.275 22:41:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:44.275 22:41:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:44.275 22:41:42 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:44.275 22:41:42 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.275 22:41:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:44.275 22:41:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:44.275 22:41:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:02:44.275 22:41:42 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:02:44.275 22:41:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:44.275 22:41:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:44.275 22:41:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:44.275 22:41:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:02:44.275 22:41:42 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:02:44.275 22:41:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:44.275 22:41:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:44.275 22:41:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:44.275 22:41:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:02:44.275 22:41:42 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:02:44.275 22:41:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:02:44.275 22:41:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:44.275 22:41:42 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:44.275 22:41:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:44.275 22:41:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:44.275 22:41:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:44.275 22:41:42 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:44.275 22:41:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:44.533 No valid GPT data, bailing 00:02:44.533 22:41:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:44.533 22:41:42 -- scripts/common.sh@391 -- # pt= 00:02:44.533 22:41:42 -- scripts/common.sh@392 -- # return 1 00:02:44.533 22:41:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:44.533 1+0 records in 00:02:44.533 1+0 records out 00:02:44.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426275 s, 246 MB/s 00:02:44.533 22:41:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:44.533 22:41:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:44.533 22:41:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:44.533 22:41:42 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:44.533 22:41:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:44.533 No valid GPT data, bailing 00:02:44.533 22:41:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:44.533 22:41:42 -- scripts/common.sh@391 -- # pt= 00:02:44.533 22:41:42 -- scripts/common.sh@392 -- # return 1 00:02:44.533 22:41:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:44.533 1+0 records in 00:02:44.533 1+0 records out 00:02:44.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662535 s, 158 MB/s 00:02:44.533 22:41:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:44.533 22:41:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:44.533 22:41:42 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme2n1 00:02:44.533 22:41:42 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:02:44.533 22:41:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:02:44.533 No valid GPT data, bailing 00:02:44.533 22:41:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:02:44.533 22:41:42 -- scripts/common.sh@391 -- # pt= 00:02:44.533 22:41:42 -- scripts/common.sh@392 -- # return 1 00:02:44.533 22:41:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:02:44.533 1+0 records in 00:02:44.533 1+0 records out 00:02:44.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429542 s, 244 MB/s 00:02:44.533 22:41:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:44.533 22:41:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:44.533 22:41:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:02:44.533 22:41:42 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:02:44.533 22:41:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:02:44.533 No valid GPT data, bailing 00:02:44.533 22:41:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:02:44.533 22:41:42 -- scripts/common.sh@391 -- # pt= 00:02:44.533 22:41:42 -- scripts/common.sh@392 -- # return 1 00:02:44.533 22:41:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:02:44.533 1+0 records in 00:02:44.534 1+0 records out 00:02:44.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415664 s, 252 MB/s 00:02:44.534 22:41:42 -- spdk/autotest.sh@118 -- # sync 00:02:44.534 22:41:42 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:44.534 22:41:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:44.534 22:41:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:49.802 22:41:47 -- spdk/autotest.sh@124 -- # uname -s 00:02:49.802 22:41:47 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:49.803 22:41:47 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.803 22:41:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:49.803 22:41:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:49.803 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:02:49.803 ************************************ 00:02:49.803 START TEST setup.sh 00:02:49.803 ************************************ 00:02:49.803 22:41:47 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.803 * Looking for test storage... 
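The trace above (autotest.sh@96-118) walks every NVMe namespace, skips zoned devices, probes each remaining disk for a partition table, and zeroes the first MiB of any unused one before syncing. A minimal stand-alone sketch of that pass, assuming a plain blkid probe is sufficient (the real script also runs scripts/spdk-gpt.py first):

# Illustrative re-creation of the pre-cleanup device pass; not the SPDK script itself.
shopt -s extglob nullglob
for dev in /dev/nvme*n!(*p*); do
    name=${dev#/dev/}
    # Skip zoned namespaces, exactly as get_zoned_devs/is_block_zoned do above.
    if [[ -e /sys/block/$name/queue/zoned ]] && [[ $(< /sys/block/$name/queue/zoned) != none ]]; then
        continue
    fi
    # Wipe only devices that carry no partition table ("No valid GPT data" in the trace).
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1    # clobber stale metadata in the first MiB
    fi
done
sync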
00:02:49.803 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:49.803 22:41:47 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:49.803 22:41:47 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:49.803 22:41:47 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:49.803 22:41:47 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:49.803 22:41:47 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:49.803 22:41:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:49.803 ************************************ 00:02:49.803 START TEST acl 00:02:49.803 ************************************ 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:49.803 * Looking for test storage... 00:02:49.803 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:49.803 22:41:47 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:02:49.803 22:41:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:49.803 22:41:47 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:49.803 22:41:47 setup.sh.acl -- setup/acl.sh@12 -- # declare -a 
devs 00:02:49.803 22:41:47 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:49.803 22:41:47 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:49.803 22:41:47 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:49.803 22:41:47 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.803 22:41:47 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.073 22:41:52 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:55.073 22:41:52 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:55.073 22:41:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.073 22:41:53 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:55.073 22:41:53 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.073 22:41:53 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:58.357 Hugepages 00:02:58.357 node hugesize free / total 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 00:02:58.357 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:58.358 22:41:56 
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.358 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:dc:00.0 == *:*:*.* ]] 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\c\:\0\0\.\0* ]] 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:dd:00.0 == *:*:*.* ]] 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.616 22:41:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\d\:\0\0\.\0* ]] 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:de:00.0 == *:*:*.* ]] 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\e\:\0\0\.\0* ]] 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.617 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.875 22:41:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:df:00.0 == *:*:*.* ]] 00:02:58.875 22:41:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.875 22:41:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\f\:\0\0\.\0* ]] 00:02:58.875 22:41:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.875 22:41:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.875 22:41:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.875 22:41:56 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:02:58.875 22:41:56 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:58.875 22:41:56 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:58.875 22:41:56 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:58.875 22:41:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.875 ************************************ 00:02:58.875 START TEST denied 00:02:58.875 ************************************ 00:02:58.875 22:41:56 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:58.875 22:41:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:dc:00.0' 00:02:58.875 22:41:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:58.875 22:41:56 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:dc:00.0' 00:02:58.875 22:41:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.875 22:41:56 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:06.992 0000:dc:00.0 (8086 0953): Skipping denied controller at 0000:dc:00.0 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/acl.sh@40 -- # 
verify 0000:dc:00.0 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:dc:00.0 ]] 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:dc:00.0/driver 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.992 22:42:04 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.109 00:03:15.109 real 0m16.103s 00:03:15.109 user 0m3.700s 00:03:15.109 sys 0m6.867s 00:03:15.109 22:42:13 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.109 22:42:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:15.109 ************************************ 00:03:15.110 END TEST denied 00:03:15.110 ************************************ 00:03:15.110 22:42:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:15.110 22:42:13 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.110 22:42:13 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.110 22:42:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:15.110 ************************************ 00:03:15.110 START TEST allowed 00:03:15.110 ************************************ 00:03:15.110 22:42:13 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:15.110 22:42:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:dc:00.0 .*: nvme -> .*' 00:03:15.110 22:42:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:dc:00.0 00:03:15.110 22:42:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:15.110 22:42:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.110 22:42:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:23.224 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:03:23.224 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:03:23.224 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:23.224 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:23.224 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:dd:00.0 ]] 00:03:23.224 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:dd:00.0/driver 00:03:23.224 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:de:00.0 ]] 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:de:00.0/driver 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # 
driver=/sys/bus/pci/drivers/nvme 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:df:00.0 ]] 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:df:00.0/driver 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.225 22:42:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.345 00:03:31.345 real 0m15.056s 00:03:31.345 user 0m3.796s 00:03:31.345 sys 0m6.477s 00:03:31.345 22:42:28 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.345 22:42:28 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:31.345 ************************************ 00:03:31.345 END TEST allowed 00:03:31.345 ************************************ 00:03:31.345 00:03:31.345 real 0m40.568s 00:03:31.345 user 0m10.978s 00:03:31.345 sys 0m19.470s 00:03:31.345 22:42:28 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.345 22:42:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.345 ************************************ 00:03:31.345 END TEST acl 00:03:31.345 ************************************ 00:03:31.345 22:42:28 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.345 22:42:28 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.345 22:42:28 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.345 22:42:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:31.345 ************************************ 00:03:31.345 START TEST hugepages 00:03:31.345 ************************************ 00:03:31.345 22:42:28 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.345 * Looking for test storage... 
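Both the denied and allowed tests steer scripts/setup.sh with the PCI_BLOCKED and PCI_ALLOWED environment variables and then confirm which kernel driver each controller ended up on by resolving its sysfs driver symlink, as acl.sh@31-33 does in the trace. A hedged sketch of that binding check (check_driver is a hypothetical helper name, not part of acl.sh):

# Illustrative driver-binding check mirroring the verify steps traced above.
check_driver() {
    local bdf=$1 expected=$2 link
    [[ -e /sys/bus/pci/devices/$bdf ]] || { echo "no such device: $bdf" >&2; return 1; }
    link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")    # e.g. /sys/bus/pci/drivers/nvme
    [[ ${link##*/} == "$expected" ]]
}

# Example: block 0000:dc:00.0 from setup.sh config, then confirm it stayed on the kernel nvme driver.
# PCI_BLOCKED=' 0000:dc:00.0' ./scripts/setup.sh config
check_driver 0000:dc:00.0 nvme && echo "0000:dc:00.0 is bound to nvme"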
00:03:31.345 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.345 22:42:28 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 42426392 kB' 'MemAvailable: 45847576 kB' 'Buffers: 11052 kB' 'Cached: 10787552 kB' 'SwapCached: 0 kB' 'Active: 7894760 kB' 'Inactive: 3426680 kB' 'Active(anon): 7503092 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526340 kB' 'Mapped: 157536 kB' 'Shmem: 6980256 kB' 'KReclaimable: 199976 kB' 'Slab: 607120 kB' 'SReclaimable: 199976 kB' 'SUnreclaim: 407144 kB' 'KernelStack: 18768 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36438932 kB' 'Committed_AS: 8843420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207528 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 
22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.346 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.347 22:42:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.347 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:31.348 22:42:28 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:31.348 22:42:28 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.348 22:42:28 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.348 22:42:28 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.348 ************************************ 00:03:31.348 START TEST default_setup 00:03:31.348 ************************************ 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.348 22:42:28 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:33.882 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:33.882 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:34.140 
0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:36.043 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.043 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.043 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:03:36.043 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.951 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44675344 kB' 'MemAvailable: 48095084 kB' 'Buffers: 11052 kB' 'Cached: 10787704 kB' 'SwapCached: 0 kB' 'Active: 7915720 kB' 'Inactive: 3426680 kB' 'Active(anon): 7524052 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546488 kB' 'Mapped: 157620 kB' 'Shmem: 6980408 kB' 'KReclaimable: 197088 kB' 'Slab: 599408 kB' 'SReclaimable: 197088 kB' 'SUnreclaim: 402320 kB' 'KernelStack: 18848 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8866580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207672 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
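The rebind lines above come from scripts/setup.sh handing the ioatdma channels and NVMe controllers over to vfio-pci so SPDK can drive them from user space; the log then moves on to verify_nr_hugepages, whose meminfo walk continues below. For reference, a hand-rolled rebind of a single device via sysfs looks roughly like the sketch here; setup.sh's own logic is more involved, and the address is just one of the controllers listed above:

    # Move one PCI device from its kernel driver to vfio-pci (run as root)
    bdf=0000:dc:00.0
    modprobe vfio-pci
    # Prefer vfio-pci the next time this device is probed
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
    # Detach the current driver, if any
    if [ -e /sys/bus/pci/devices/$bdf/driver ]; then
        echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind
    fi
    # Re-probe so vfio-pci picks the device up
    echo $bdf > /sys/bus/pci/drivers_probe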
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.952 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.953 22:42:35 
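At this point the AnonHugePages pass has finished (anon=0) and the identical walk restarts for HugePages_Surp, and later HugePages_Rsvd, in the trace that follows. Condensed out of the xtrace noise, the lookup amounts to something like the sketch below; this is an illustrative rewrite, not the verbatim setup/common.sh code:

    # get_meminfo KEY [NODE] - condensed sketch of the walk traced above
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # with a node argument, read the per-node meminfo file instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # skip every key until the requested one, then print its value
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    get_meminfo AnonHugePages    # -> 0 (kB), matching "anon=0" above
    get_meminfo HugePages_Total  # -> 1024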
setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44677380 kB' 'MemAvailable: 48097120 kB' 'Buffers: 11052 kB' 'Cached: 10787708 kB' 'SwapCached: 0 kB' 'Active: 7915416 kB' 'Inactive: 3426680 kB' 'Active(anon): 7523748 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547124 kB' 'Mapped: 157464 kB' 'Shmem: 6980412 kB' 'KReclaimable: 197088 kB' 'Slab: 599316 kB' 'SReclaimable: 197088 kB' 'SUnreclaim: 402228 kB' 'KernelStack: 18832 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8866600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207688 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.953 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.954 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 
22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44672680 kB' 'MemAvailable: 48092420 kB' 'Buffers: 11052 kB' 'Cached: 10787724 kB' 'SwapCached: 0 kB' 'Active: 7917120 kB' 'Inactive: 
3426680 kB' 'Active(anon): 7525452 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548760 kB' 'Mapped: 157968 kB' 'Shmem: 6980428 kB' 'KReclaimable: 197088 kB' 'Slab: 599316 kB' 'SReclaimable: 197088 kB' 'SUnreclaim: 402228 kB' 'KernelStack: 18912 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8869032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207800 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.955 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 
22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.956 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.957 nr_hugepages=1024 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.957 resv_hugepages=0 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.957 surplus_hugepages=0 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.957 anon_hugepages=0 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44664664 kB' 'MemAvailable: 48084404 kB' 'Buffers: 11052 kB' 'Cached: 10787748 kB' 'SwapCached: 0 kB' 'Active: 7921416 kB' 'Inactive: 3426680 kB' 'Active(anon): 7529748 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552584 kB' 'Mapped: 158332 kB' 'Shmem: 6980452 kB' 'KReclaimable: 197088 kB' 'Slab: 599316 kB' 'SReclaimable: 197088 kB' 
'SUnreclaim: 402228 kB' 'KernelStack: 18960 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8871272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207740 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.957 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.958 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.958 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:37.959 22:42:35 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 25716408 kB' 'MemUsed: 6919244 kB' 'SwapCached: 0 kB' 'Active: 3116092 kB' 'Inactive: 183336 kB' 'Active(anon): 2880296 kB' 'Inactive(anon): 0 kB' 'Active(file): 235796 kB' 'Inactive(file): 183336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2998544 kB' 'Mapped: 45156 kB' 'AnonPages: 304072 kB' 'Shmem: 2579412 kB' 'KernelStack: 10712 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88364 kB' 'Slab: 336032 kB' 'SReclaimable: 88364 kB' 'SUnreclaim: 247668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.959 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:37.960 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.961 node0=1024 expecting 1024 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.961 00:03:37.961 real 0m7.352s 00:03:37.961 user 0m2.050s 00:03:37.961 sys 0m3.282s 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.961 22:42:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:37.961 ************************************ 00:03:37.961 END TEST default_setup 00:03:37.961 ************************************ 00:03:37.961 22:42:35 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:37.961 22:42:35 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.961 22:42:35 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.961 22:42:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.961 ************************************ 00:03:37.961 START TEST per_node_1G_alloc 00:03:37.961 ************************************ 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:37.961 
22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.961 22:42:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:41.248 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.248 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.248 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:03:41.248 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.248 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 
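[editor's note] The trace above (setup/hugepages.sh@49-@146) shows per_node_1G_alloc asking scripts/setup.sh for 512 x 2048 kB pages on each of NUMA nodes 0 and 1 (NRHUGE=512, HUGENODE=0,1, 1024 pages in total), after which verify_nr_hugepages re-reads the counters with the same get_meminfo helper whose field-by-field matching fills the earlier default_setup log: each meminfo line is split on IFS=': ' and the value is echoed once the requested key matches. Below is a minimal stand-alone sketch of that lookup under those assumptions; get_meminfo_sketch is an illustrative name, not the SPDK helper itself, and the single-digit node handling is an assumption of the sketch.

  # Rough stand-in for setup/common.sh's get_meminfo: return the value of one
  # /proc/meminfo (or per-node meminfo) field, splitting each line on ': '.
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local file=/proc/meminfo line var val _
      # Per-node counters live in sysfs and carry a "Node <n> " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node [0-9] }               # drop the prefix (single-digit nodes assumed)
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$key" ]]; then
              echo "$val"                        # kB value, or a bare page count for HugePages_*
              return 0
          fi
      done < "$file"
      return 1
  }

  # Roughly what this test drives: 512 pages per node, then confirm the kernel took them.
  # Run from the spdk checkout; NRHUGE and HUGENODE are the knobs shown in the trace above.
  sudo NRHUGE=512 HUGENODE=0,1 ./scripts/setup.sh
  get_meminfo_sketch HugePages_Total        # expected 1024 system-wide
  get_meminfo_sketch HugePages_Total 0      # expected 512 on node 0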
00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44666144 kB' 'MemAvailable: 48085884 kB' 'Buffers: 11052 kB' 'Cached: 10787872 kB' 'SwapCached: 0 kB' 'Active: 7912204 kB' 'Inactive: 3426680 kB' 'Active(anon): 7520536 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543420 kB' 'Mapped: 156476 kB' 'Shmem: 6980576 kB' 'KReclaimable: 197088 kB' 'Slab: 599720 kB' 'SReclaimable: 197088 kB' 'SUnreclaim: 402632 kB' 'KernelStack: 18752 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8855284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207656 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.628 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 
22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.629 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.630 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44667596 kB' 'MemAvailable: 48087336 kB' 'Buffers: 11052 kB' 'Cached: 10787876 kB' 'SwapCached: 0 kB' 'Active: 7911876 kB' 'Inactive: 3426680 kB' 'Active(anon): 7520208 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543076 kB' 'Mapped: 156400 kB' 'Shmem: 6980580 kB' 'KReclaimable: 197088 kB' 'Slab: 599676 kB' 'SReclaimable: 197088 kB' 'SUnreclaim: 402588 kB' 'KernelStack: 18736 kB' 'PageTables: 7524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8855300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207640 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.630 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.631 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # 
local node= 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44667596 kB' 'MemAvailable: 48087336 kB' 'Buffers: 11052 kB' 'Cached: 10787896 kB' 'SwapCached: 0 kB' 'Active: 7911792 kB' 'Inactive: 3426680 kB' 'Active(anon): 7520124 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542960 kB' 'Mapped: 156400 kB' 'Shmem: 6980600 kB' 'KReclaimable: 197088 kB' 'Slab: 599676 kB' 'SReclaimable: 197088 kB' 'SUnreclaim: 402588 kB' 'KernelStack: 18736 kB' 'PageTables: 7524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8855324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207640 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.632 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.633 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.634 nr_hugepages=1024 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.634 resv_hugepages=0 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.634 surplus_hugepages=0 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.634 anon_hugepages=0 00:03:42.634 22:42:40 
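The loop traced above is setup/common.sh's get_meminfo walking the meminfo snapshot one key at a time until it reaches HugePages_Rsvd, which is 0 on this host; setup/hugepages.sh then records resv_hugepages=0 next to nr_hugepages=1024, surplus_hugepages=0 and anon_hugepages=0. A minimal sketch of that parsing pattern follows (an assumed standalone re-implementation for illustration, not the SPDK helper itself):

    # Sketch (assumed): scan a meminfo-style file and print the value of one key,
    # using the same IFS=': ' / read -r var val _ / "skip until match" pattern the
    # trace above shows for HugePages_Rsvd.
    get_meminfo_sketch() {
        local get=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching key is skipped, as in the trace
            echo "$val"
            return 0
        done < "$file"
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd   -> prints 0 here, matching resv=0 above
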
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44668524 kB' 'MemAvailable: 48088264 kB' 'Buffers: 11052 kB' 'Cached: 10787936 kB' 'SwapCached: 0 kB' 'Active: 7911460 kB' 'Inactive: 3426680 kB' 'Active(anon): 7519792 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542572 kB' 'Mapped: 156400 kB' 'Shmem: 6980640 kB' 'KReclaimable: 197088 kB' 'Slab: 599676 kB' 'SReclaimable: 197088 kB' 'SUnreclaim: 402588 kB' 'KernelStack: 18720 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8855344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207640 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 
22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.634 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.635 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 
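At this point the trace has confirmed the system-wide view: HugePages_Total is 1024, which equals nr_hugepages + surplus + reserved (1024 + 0 + 0), and at a Hugepagesize of 2048 kB that matches the 'Hugetlb: 2097152 kB' figure in the meminfo snapshot above (1024 x 2048 kB). setup/hugepages.sh then enumerates /sys/devices/system/node/node* and primes an expectation of 512 pages on each of the two NUMA nodes (no_nodes=2). A rough sketch of that get_nodes step, assuming a plain-bash simplification rather than the exact extglob loop in the script:

    # Sketch (assumed simplification): expect the 1024 pages to be split evenly,
    # 512 per NUMA node, across however many nodes the sysfs tree exposes.
    nr_hugepages=1024
    nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=512          # node0 -> 512, node1 -> 512
    done
    no_nodes=${#nodes_sys[@]}                  # 2 on this host
    (( no_nodes > 0 )) || exit 1
    (( nr_hugepages == 512 * no_nodes ))       # 1024 == 512 + 512, matching the trace
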
00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 26758052 kB' 'MemUsed: 5877600 kB' 'SwapCached: 0 kB' 'Active: 3113620 kB' 'Inactive: 183336 kB' 'Active(anon): 2877824 kB' 'Inactive(anon): 0 kB' 'Active(file): 235796 kB' 'Inactive(file): 183336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2998548 kB' 'Mapped: 44416 kB' 'AnonPages: 301700 kB' 'Shmem: 2579416 kB' 'KernelStack: 10728 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88364 kB' 'Slab: 336556 kB' 'SReclaimable: 88364 kB' 'SUnreclaim: 248192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.636 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.637 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659312 kB' 'MemFree: 17911956 kB' 'MemUsed: 9747356 kB' 'SwapCached: 0 kB' 'Active: 4798144 kB' 'Inactive: 3243344 kB' 'Active(anon): 4642272 kB' 'Inactive(anon): 0 kB' 'Active(file): 155872 kB' 'Inactive(file): 3243344 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7800464 kB' 'Mapped: 111984 kB' 'AnonPages: 241032 kB' 'Shmem: 4401248 kB' 'KernelStack: 7976 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108724 kB' 'Slab: 263120 kB' 'SReclaimable: 108724 kB' 'SUnreclaim: 154396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 
22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 
22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.638 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 22:42:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.639 node0=512 expecting 512 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:42.639 node1=512 expecting 512 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:42.639 00:03:42.639 real 0m4.910s 00:03:42.639 user 0m1.901s 00:03:42.639 sys 0m3.042s 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.639 22:42:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:42.639 ************************************ 00:03:42.639 END TEST per_node_1G_alloc 00:03:42.639 ************************************ 00:03:42.639 22:42:40 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:42.639 22:42:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.639 22:42:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.639 22:42:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.897 ************************************ 00:03:42.897 START TEST even_2G_alloc 00:03:42.897 ************************************ 00:03:42.897 22:42:40 
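The 'node0=512 expecting 512' and 'node1=512 expecting 512' lines above are the per-node check at the end of per_node_1G_alloc: with Hugepagesize at 2048 kB, 512 pages per node is the 1 GiB-per-node target, and the test passes because the observed and expected counts match ([[ 512 == 512 ]]). A minimal sketch of that kind of per-node comparison, assuming the kernel's standard per-node hugepage counters (paths and names here are illustrative, not the actual setup/hugepages.sh code):

  # Illustrative only: compare observed per-node 2 MB hugepage counts with an
  # expected value (512 per node in the run above).
  expected=512
  status=0
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*/node}
      nr_file=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
      [[ -r $nr_file ]] || continue
      actual=$(<"$nr_file")
      echo "node${node}=${actual} expecting ${expected}"
      (( actual == expected )) || status=1
  done
  exit $status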
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.897 22:42:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:46.184 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.184 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.184 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:03:46.184 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:00:04.3 (8086 
2021): Already using the vfio-pci driver 00:03:46.184 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.184 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44681668 kB' 'MemAvailable: 48101404 kB' 'Buffers: 11052 kB' 'Cached: 10788064 kB' 'SwapCached: 0 kB' 'Active: 7913020 kB' 'Inactive: 3426680 kB' 'Active(anon): 7521352 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543488 kB' 'Mapped: 156564 kB' 'Shmem: 
6980768 kB' 'KReclaimable: 197080 kB' 'Slab: 600200 kB' 'SReclaimable: 197080 kB' 'SUnreclaim: 403120 kB' 'KernelStack: 18752 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8856260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207592 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.566 
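The long runs of '[[ <field> == ... ]] ... continue' entries above are the xtrace of get_meminfo skipping non-matching meminfo fields until it reaches the requested one and echoes its value (0 for AnonHugePages just above, with HugePages_Surp looked up next). A minimal sketch of that lookup, assuming the same meminfo layout (simplified relative to the traced setup/common.sh helper, which reads the whole file into an array and strips the 'Node <n> ' prefix for per-node files):

  # Minimal sketch of the lookup traced above (illustrative, not the real helper).
  # get_meminfo <field> [node]  -> prints the field's value from (per-node) meminfo
  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

  get_meminfo HugePages_Surp      # prints 0 in the run above
  get_meminfo HugePages_Free 0    # per-node variant (node 0)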
22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44683448 kB' 'MemAvailable: 48103184 kB' 'Buffers: 11052 kB' 'Cached: 10788068 kB' 'SwapCached: 0 kB' 'Active: 7912720 kB' 'Inactive: 3426680 kB' 'Active(anon): 7521052 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543708 kB' 'Mapped: 156444 kB' 'Shmem: 6980772 kB' 'KReclaimable: 197080 kB' 'Slab: 600160 kB' 'SReclaimable: 197080 kB' 'SUnreclaim: 403080 kB' 'KernelStack: 18736 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8856276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207560 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.567 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.568 22:42:45 
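even_2G_alloc was entered with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes (the hugepages.sh@153 entries earlier), i.e. 1024 x 2048 kB = 2 GiB of hugepages split evenly, which is why both nodes are expected to hold 512 pages while HugePages_Surp and HugePages_Rsvd read back as 0. A hedged sketch of how such an even split can be requested through the per-node counters (illustrative; the actual allocation is performed by scripts/setup.sh, not by this snippet):

  # Illustrative only: spread NRHUGE 2 MB hugepages evenly across NUMA nodes
  # by writing the per-node nr_hugepages counters (requires root).
  NRHUGE=${NRHUGE:-1024}
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$(( NRHUGE / ${#nodes[@]} ))   # 1024 / 2 = 512 in the run above
  for node_dir in "${nodes[@]}"; do
      echo "$per_node" > "$node_dir/hugepages/hugepages-2048kB/nr_hugepages"
  done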
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44681684 kB' 'MemAvailable: 48101420 kB' 'Buffers: 11052 kB' 'Cached: 10788088 kB' 'SwapCached: 0 kB' 'Active: 7912796 kB' 'Inactive: 3426680 kB' 'Active(anon): 7521128 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543700 kB' 'Mapped: 156444 kB' 'Shmem: 6980792 kB' 'KReclaimable: 197080 kB' 'Slab: 600160 kB' 'SReclaimable: 197080 kB' 'SUnreclaim: 403080 kB' 'KernelStack: 18736 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8856296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207576 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 
22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 
22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.570 nr_hugepages=1024 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.570 resv_hugepages=0 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.570 surplus_hugepages=0 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.570 anon_hugepages=0 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.570 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
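The trace up to this point is the get_meminfo helper from setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key: it has just returned 0 for HugePages_Surp and HugePages_Rsvd (so surp=0 and resv=0), echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, confirmed (( 1024 == nr_hugepages + surp + resv )), and is now re-reading /proc/meminfo for HugePages_Total. A minimal, illustrative sketch of that lookup pattern as reconstructed from the traced commands follows; the name get_meminfo_sketch and the standalone-script framing are assumptions, not the actual setup/common.sh source.

#!/usr/bin/env bash
# Sketch of the meminfo lookup the trace shows: read the system-wide or
# per-node meminfo file, strip the optional "Node N " prefix, split each
# line on ': ' and print the value for the requested key.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# Example: surp=$(get_meminfo_sketch HugePages_Surp)   # prints 0 on this box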
00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44682800 kB' 'MemAvailable: 48102536 kB' 'Buffers: 11052 kB' 'Cached: 10788088 kB' 'SwapCached: 0 kB' 'Active: 7912476 kB' 'Inactive: 3426680 kB' 'Active(anon): 7520808 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543384 kB' 'Mapped: 156444 kB' 'Shmem: 6980792 kB' 'KReclaimable: 197080 kB' 'Slab: 600160 kB' 'SReclaimable: 197080 kB' 'SUnreclaim: 403080 kB' 'KernelStack: 18736 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8856320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207576 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 
22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.571 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.572 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 26764848 kB' 'MemUsed: 5870804 kB' 'SwapCached: 0 kB' 'Active: 3115228 kB' 'Inactive: 183336 kB' 'Active(anon): 2879432 kB' 'Inactive(anon): 0 kB' 'Active(file): 235796 kB' 'Inactive(file): 183336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2998600 kB' 'Mapped: 44460 kB' 'AnonPages: 303196 kB' 'Shmem: 2579468 kB' 'KernelStack: 10712 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88356 kB' 'Slab: 336812 kB' 
'SReclaimable: 88356 kB' 'SUnreclaim: 248456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 
22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.573 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659312 kB' 'MemFree: 17917532 kB' 'MemUsed: 9741780 kB' 'SwapCached: 0 kB' 'Active: 4797988 kB' 'Inactive: 3243344 kB' 'Active(anon): 4642116 kB' 'Inactive(anon): 0 kB' 'Active(file): 155872 kB' 'Inactive(file): 3243344 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7800540 kB' 'Mapped: 111984 kB' 'AnonPages: 240428 kB' 'Shmem: 4401324 kB' 'KernelStack: 8040 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108724 kB' 'Slab: 263348 kB' 'SReclaimable: 108724 kB' 
'SUnreclaim: 154624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.574 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 
22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.575 node0=512 expecting 512 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:47.575 node1=512 expecting 512 00:03:47.575 22:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.575 00:03:47.575 real 0m4.837s 00:03:47.575 user 0m1.858s 00:03:47.575 sys 0m3.046s 00:03:47.576 22:42:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.576 22:42:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.576 ************************************ 00:03:47.576 END TEST even_2G_alloc 00:03:47.576 ************************************ 00:03:47.576 22:42:45 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:47.576 22:42:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.576 22:42:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.576 22:42:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.576 ************************************ 00:03:47.576 START TEST odd_alloc 00:03:47.576 ************************************ 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:47.576 22:42:45 
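Note: the even_2G_alloc verification that finishes above walks /proc/meminfo (or the per-node copy under /sys/devices/system/node) one key at a time, skipping every line until it reaches the requested counter; that is what the long run of "continue" traces from setup/common.sh@31-33 is. A minimal sketch of that lookup, reconstructed only from the traced commands (the body below is an approximation, not the verbatim SPDK setup/common.sh):

get_meminfo() {
    local get=$1 node=$2            # e.g. get_meminfo HugePages_Surp 1
    local var val
    local mem_f=/proc/meminfo mem
    # A node argument switches to the node-local meminfo file, as in the trace.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node 1 " prefix in node files
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
        echo "$val"                        # e.g. 0 for HugePages_Surp
        return 0
    done
}

The hugepages test then feeds the result back into its bookkeeping, e.g. (( nodes_test[node] += $(get_meminfo HugePages_Surp 1) )), which is why each per-node scan above ends with "echo 0" followed by "return 0".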
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.576 22:42:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:50.866 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.866 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.866 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:03:50.866 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:00:04.0 (8086 
2021): Already using the vfio-pci driver 00:03:50.866 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.866 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.242 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.507 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44661084 kB' 'MemAvailable: 48080820 kB' 'Buffers: 11052 kB' 'Cached: 10788244 kB' 'SwapCached: 0 kB' 'Active: 7914944 kB' 'Inactive: 3426680 kB' 'Active(anon): 7523276 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545452 kB' 'Mapped: 156584 kB' 'Shmem: 6980948 kB' 'KReclaimable: 197080 kB' 'Slab: 600028 kB' 'SReclaimable: 197080 kB' 'SUnreclaim: 402948 kB' 'KernelStack: 18752 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486484 kB' 'Committed_AS: 8857116 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 207608 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:52.507 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 
22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 
22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44662220 kB' 'MemAvailable: 48081956 kB' 'Buffers: 11052 kB' 'Cached: 10788248 kB' 'SwapCached: 0 kB' 'Active: 7914684 kB' 'Inactive: 3426680 kB' 'Active(anon): 7523016 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545732 kB' 'Mapped: 156500 kB' 'Shmem: 6980952 kB' 'KReclaimable: 197080 kB' 'Slab: 600012 kB' 'SReclaimable: 197080 kB' 'SUnreclaim: 402932 kB' 'KernelStack: 18736 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486484 kB' 'Committed_AS: 8860124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207608 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:52.509 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
The trace then repeats the same four-step pattern (setup/common.sh@31 sets IFS=': ', @31 reads var/val, @32 tests the field name against HugePages_Surp, @32 continue) for every remaining /proc/meminfo field until the HugePages_Surp line itself is reached, at which point the helper prints its value and returns:
00:03:52.511 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.511 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.511 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.511 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
get_meminfo is then invoked again, this time for HugePages_Rsvd (setup/hugepages.sh@100). No node argument is given, so the helper keeps mem_f=/proc/meminfo, maps the file into an array, strips any 'Node N ' prefix, and prints the snapshot it will scan:
00:03:52.511 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44662536 kB' 'MemAvailable: 48082272 kB' 'Buffers: 11052 kB' 'Cached: 10788264 kB' 'SwapCached: 0 kB' 'Active: 7914644 kB' 'Inactive: 3426680 kB' 'Active(anon): 7522976 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545616 kB' 'Mapped: 156500 kB' 'Shmem: 6980968 kB' 'KReclaimable: 197080 kB' 'Slab: 600004 kB' 'SReclaimable: 197080 kB' 'SUnreclaim: 402924 kB' 'KernelStack: 18720 kB' 'PageTables: 7484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486484 kB' 'Committed_AS: 8857152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207576 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB'
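That lookup pattern (mapfile the meminfo file, strip any per-node 'Node N ' prefix, then IFS=': ' read each line and compare the field name until it matches) is compact enough to restate outside the harness. The following is a minimal standalone sketch of the idea, not the actual setup/common.sh helper; the function name and the unit handling are illustrative assumptions:

  #!/usr/bin/env bash
  # Minimal sketch of the meminfo field lookup traced in this log.
  # Not the real setup/common.sh helper; names are illustrative.
  shopt -s extglob

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # A node argument selects the per-node meminfo file instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <N> "; strip it so the
      # same field names work for the global and per-node variants alike.
      mem=("${mem[@]#Node +([0-9]) }")

      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skip non-matching fields
          echo "$val"                        # numeric value only, unit dropped
          return 0
      done
      return 1
  }

  # On the machine traced above these print 1025 and 0 respectively.
  get_meminfo_sketch HugePages_Total
  get_meminfo_sketch HugePages_Surp 0
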
Scanning that snapshot for HugePages_Rsvd proceeds exactly as before, skipping every non-matching field with continue until the HugePages_Rsvd line is reached:
00:03:52.513 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.513 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.513 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.513 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
With surp and resv both 0, the test reports the values it has collected and checks that the odd allocation adds up:
00:03:52.513 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:52.513 nr_hugepages=1025 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.513 resv_hugepages=0 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.513 surplus_hugepages=0 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.513 anon_hugepages=0 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:52.513 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
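Both assertions are plain integer arithmetic over counters that can be pulled straight from /proc/meminfo. A self-contained sketch of the same kind of consistency check, using values read live rather than the script's internal variables (the variable names and the exact expression are illustrative, not the precise test in setup/hugepages.sh):

  #!/usr/bin/env bash
  # Sketch of the hugepage accounting check this test performs.
  # In the run traced above the assertion was effectively
  #   1025 == nr_hugepages + surp + resv   i.e. 1025 == 1025 + 0 + 0.
  requested=1025   # the odd allocation the test asks for

  meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

  total=$(meminfo_val HugePages_Total)
  surp=$(meminfo_val HugePages_Surp)
  resv=$(meminfo_val HugePages_Rsvd)

  if (( requested == total && surp == 0 && resv == 0 )); then
      echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
  else
      echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv (wanted $requested)" >&2
      exit 1
  fi
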
get_meminfo HugePages_Total is called next (setup/hugepages.sh@110) and re-reads /proc/meminfo; the snapshot it prints is essentially the one shown above, with only a few runtime counters (MemFree, AnonPages, KernelStack, PageTables and the like) moved slightly and the hugepage counters unchanged: 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0'. The field-by-field scan skips every entry until HugePages_Total is reached and returns the requested value:
00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
With the 1025-page total confirmed, get_nodes enumerates the NUMA nodes and records the per-node share of the test allocation:
00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
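An odd count such as 1025 cannot split evenly across two nodes, which is the point of this odd_alloc case; the 512 and 513 recorded above are the two shares of that total. Below is a small sketch of how such a split can be computed for however many nodes are present (the remainder policy, i.e. which node receives the extra page, is an assumption; the run above happened to record the extra page on node 1):

  #!/usr/bin/env bash
  # Illustrative sketch: spread an odd hugepage count across the NUMA
  # nodes visible in sysfs and print the per-node targets. The sysfs
  # layout matches what the trace reads; the split policy is assumed.
  requested=1025
  nodes=(/sys/devices/system/node/node[0-9]*)
  count=${#nodes[@]}

  base=$(( requested / count ))
  rem=$(( requested % count ))

  for i in "${!nodes[@]}"; do
      n=${nodes[i]##*/node}                 # numeric node id from the path
      per_node=$base
      (( i < rem )) && per_node=$(( per_node + 1 ))
      echo "node$n target: $per_node hugepages"
  done
  # On the two-node machine traced above this prints 513 and 512; the
  # totals match the 512/513 split that get_nodes recorded, only the
  # node that carries the extra page differs.
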
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
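Note: the long run of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' entries surrounding this point is the xtrace of get_meminfo in setup/common.sh walking one meminfo field per loop iteration until it reaches the requested key. A minimal stand-alone sketch of that helper, reconstructed from this trace (the function name, the per-node meminfo path and the 'Node N ' prefix strip are taken from the trace; argument defaults and error handling are assumptions):

    shopt -s extglob                        # required for the +([0-9]) pattern below
    get_meminfo() {                         # usage: get_meminfo <Field> [<node>]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Surp 0  ->  surplus 2048 kB pages on NUMA node 0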
00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659312 kB' 'MemFree: 17904580 kB' 'MemUsed: 9754732 kB' 'SwapCached: 0 kB' 'Active: 4800168 kB' 'Inactive: 3243344 kB' 'Active(anon): 4644296 kB' 'Inactive(anon): 0 kB' 'Active(file): 155872 kB' 'Inactive(file): 3243344 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7800780 kB' 'Mapped: 111984 kB' 'AnonPages: 242960 kB' 'Shmem: 4401564 kB' 'KernelStack: 8024 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108724 kB' 'Slab: 263908 kB' 'SReclaimable: 108724 kB' 'SUnreclaim: 155184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
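Context for this odd_alloc pass: node 0 above reports HugePages_Total 512 while node 1 below reports 513, because the test asked for an odd total of 1025 pages and one node has to take the extra page. The 'node0=512 expecting 513' and 'node1=513 expecting 512' lines further down are therefore not a failure; hugepages.sh compares the observed and expected per-node counts as unordered sets. A sketch of that comparison, an assumed simplification of the hugepages.sh@126-130 steps visible in this trace:

    declare -a nodes_test=([0]=512 [1]=513)   # counts read back from each node's meminfo
    declare -a nodes_sys=([0]=513 [1]=512)    # counts the test pre-computed per node
    declare -a sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1          # index by the count itself, so the key list
        sorted_s[nodes_sys[node]]=1           # comes back sorted and de-duplicated
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc split accepted"
    # both key lists expand to "512 513", so the check passes regardless of which
    # node ended up holding the odd 1025th page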
00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.517 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:52.518 node0=512 expecting 513 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:52.518 node1=513 expecting 512 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:52.518 00:03:52.518 real 0m4.849s 00:03:52.518 user 0m1.924s 00:03:52.518 sys 0m3.001s 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.518 22:42:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.518 ************************************ 00:03:52.518 END TEST odd_alloc 00:03:52.518 ************************************ 00:03:52.518 22:42:50 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:52.518 22:42:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.518 22:42:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.518 22:42:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.518 ************************************ 00:03:52.518 START TEST custom_alloc 00:03:52.518 ************************************ 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:52.518 22:42:50 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.518 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.519 22:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:55.809 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.809 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.809 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:03:55.809 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 
00:03:55.809 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:55.809 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 43612724 kB' 'MemAvailable: 47032396 kB' 'Buffers: 11052 kB' 'Cached: 10788428 kB' 'SwapCached: 0 kB' 'Active: 7915648 kB' 'Inactive: 3426680 kB' 'Active(anon): 7523980 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546264 kB' 'Mapped: 156532 kB' 'Shmem: 6981132 kB' 'KReclaimable: 196952 kB' 'Slab: 599220 kB' 'SReclaimable: 196952 kB' 'SUnreclaim: 402268 kB' 'KernelStack: 18768 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963220 kB' 'Committed_AS: 8857448 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 207752 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.183 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.184 22:42:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.184 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.447 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
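verify_nr_hugepages opens by checking AnonHugePages from /proc/meminfo: transparent hugepages are reported as 'always [madvise] never' rather than '[never]' on this host, so any THP-backed anonymous memory would distort the accounting, and the test records anon=0 before it compares the HugePages counters. The dump above already shows the expected totals, HugePages_Total 1536 = 512 + 1024 and Hugetlb 3145728 kB = 1536 x 2048 kB. A condensed sketch of these first steps, assumed from the hugepages.sh@96-99 entries in this trace and reusing the get_meminfo sketch given earlier:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP currently in use; expected to stay 0
    fi
    surp=$(get_meminfo HugePages_Surp)      # surplus pages allocated beyond nr_hugepages
    echo "anon=${anon} surp=${surp} total=$(get_meminfo HugePages_Total)"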
00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
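
[editor's note] The xtrace above and below is the setup/common.sh get_meminfo helper scanning /proc/meminfo one field at a time: it finished the AnonHugePages lookup (anon=0) and is starting the HugePages_Surp lookup. A minimal sketch of that lookup pattern, reconstructed from the trace rather than copied from the helper (the function name and the omission of per-node handling are assumptions; the real helper also reads /sys/devices/system/node/node<N>/meminfo when a node is given):

    #!/usr/bin/env bash
    # Sketch of the skip-until-match scan visible in the trace: split each
    # /proc/meminfo line on ': ', "continue" past every field that is not the
    # requested key, then echo the matching value and return.
    get_meminfo_sketch() {
        local get=$1 mem_f=/proc/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # same per-field skip as the trace
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    # Example, with values matching this log:
    #   get_meminfo_sketch HugePages_Total   -> 1536
    #   get_meminfo_sketch HugePages_Surp    -> 0
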
00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 43613724 kB' 'MemAvailable: 47033396 kB' 'Buffers: 11052 kB' 'Cached: 10788428 kB' 'SwapCached: 0 kB' 'Active: 7915264 kB' 'Inactive: 3426680 kB' 'Active(anon): 7523596 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545888 kB' 'Mapped: 156452 kB' 'Shmem: 6981132 kB' 'KReclaimable: 196952 kB' 'Slab: 599328 kB' 'SReclaimable: 196952 kB' 'SUnreclaim: 402376 kB' 'KernelStack: 18720 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963220 kB' 'Committed_AS: 8857464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207672 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.448 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 
22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.449 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 43613724 kB' 'MemAvailable: 47033396 kB' 'Buffers: 11052 kB' 'Cached: 10788428 kB' 'SwapCached: 0 kB' 'Active: 7915768 kB' 'Inactive: 3426680 kB' 'Active(anon): 7524100 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 
'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546392 kB' 'Mapped: 156452 kB' 'Shmem: 6981132 kB' 'KReclaimable: 196952 kB' 'Slab: 599328 kB' 'SReclaimable: 196952 kB' 'SUnreclaim: 402376 kB' 'KernelStack: 18720 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963220 kB' 'Committed_AS: 8857616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207672 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.450 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.451 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
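
[editor's note] The scan just below resolves HugePages_Rsvd the same way; once the anon, surplus, and reserved lookups have all returned, setup/hugepages.sh cross-checks the pool totals (the nr_hugepages=1536 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 echoes that follow). A minimal restatement of that arithmetic using the values from this log (variable names mirror the trace; the standalone snippet is a sketch, not the script verbatim):

    # Values read back from /proc/meminfo in this run:
    nr_hugepages=1536   # HugePages_Total
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    anon=0              # AnonHugePages

    # The pool that was configured (1536 pages of 2048 kB, i.e. the
    # 'Hugetlb: 3145728 kB' reported above) must be fully accounted for:
    (( 1536 == nr_hugepages + surp + resv )) && echo "pool size matches"
    (( 1536 == nr_hugepages )) && echo "no surplus or reserved pages in use"
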
00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:57.452 nr_hugepages=1536 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.452 resv_hugepages=0 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.452 surplus_hugepages=0 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.452 anon_hugepages=0 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 43613928 kB' 'MemAvailable: 47033600 kB' 'Buffers: 11052 kB' 'Cached: 10788472 kB' 'SwapCached: 0 kB' 'Active: 7915448 kB' 'Inactive: 3426680 kB' 'Active(anon): 7523780 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546024 kB' 'Mapped: 156504 kB' 'Shmem: 6981176 kB' 'KReclaimable: 196952 kB' 'Slab: 599328 kB' 'SReclaimable: 196952 kB' 'SUnreclaim: 402376 kB' 'KernelStack: 18720 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963220 kB' 'Committed_AS: 8857644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207672 kB' 'VmallocChunk: 0 kB' 
'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.452 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.453 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.453 22:42:55 
[setup/common.sh@31-32: the IFS=': ' read/continue loop repeats here for each remaining meminfo field, Active(anon) through ShmemHugePages, none of which matches HugePages_Total] 00:03:57.454 22:42:55
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 26768220 kB' 'MemUsed: 5867432 kB' 'SwapCached: 0 kB' 'Active: 3116320 kB' 'Inactive: 183336 kB' 'Active(anon): 2880524 kB' 'Inactive(anon): 0 kB' 'Active(file): 235796 kB' 'Inactive(file): 183336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2998624 kB' 'Mapped: 44580 kB' 'AnonPages: 304320 kB' 'Shmem: 2579492 kB' 'KernelStack: 10744 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 335944 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 247716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.454 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the IFS=': ' read/continue loop repeats here for SwapCached through SUnreclaim, none of which matches HugePages_Surp]
00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.455 22:42:55 
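The lookup just traced (get_meminfo HugePages_Surp for node 0) comes down to three moves: prefer /sys/devices/system/node/node0/meminfo over /proc/meminfo when a node id is given, strip the "Node N " prefix that per-node files carry, and read field/value pairs with IFS=': ' until the requested field appears. A minimal stand-in sketch of that flow, assuming the same file layout; the function name is made up here and this is not the actual setup/common.sh helper:

    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        # prefer the per-node meminfo file when a node id is supplied and present
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node "$node" }             # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    # get_meminfo_sketch HugePages_Surp 0   -> 0 on this runner, matching the echo 0 above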
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.455 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659312 kB' 'MemFree: 16845552 kB' 'MemUsed: 10813760 kB' 'SwapCached: 0 kB' 'Active: 4799756 kB' 'Inactive: 3243344 kB' 'Active(anon): 4643884 kB' 'Inactive(anon): 0 kB' 'Active(file): 155872 kB' 'Inactive(file): 3243344 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7800932 kB' 'Mapped: 111924 kB' 'AnonPages: 242368 kB' 'Shmem: 4401716 kB' 'KernelStack: 8008 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108724 kB' 'Slab: 263384 kB' 'SReclaimable: 108724 kB' 'SUnreclaim: 154660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.456 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.456 22:42:55 
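The mapfile and prefix-strip pair at setup/common.sh@28-29 is what turns the node1 file into plain "field: value" lines before the scan; the +([0-9]) pattern relies on extglob. Run in isolation it looks roughly like this (the grep is only for illustration, and the values are the ones printed in the dump above):

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node1/meminfo
    mem=("${mem[@]#Node +([0-9]) }")    # drop the leading "Node 1 " from every line
    printf '%s\n' "${mem[@]}" | grep '^HugePages_'
    # HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Surp: 0 on this node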
[setup/common.sh@31-32: the IFS=': ' read/continue loop repeats here for SwapCached through AnonHugePages, none of which matches HugePages_Surp]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.457 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:57.458 node0=512 expecting 512 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:57.458 node1=1024 expecting 1024 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:57.458 00:03:57.458 real 0m4.868s 00:03:57.458 user 0m1.835s 00:03:57.458 sys 0m3.100s 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.458 22:42:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.458 ************************************ 00:03:57.458 END TEST custom_alloc 00:03:57.458 ************************************ 00:03:57.458 22:42:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:57.458 22:42:55 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.458 22:42:55 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.458 22:42:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.458 ************************************ 00:03:57.458 START TEST no_shrink_alloc 00:03:57.458 ************************************ 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.458 22:42:55 
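Two numbers are worth connecting at this hand-off. custom_alloc finishes with its 1536-page pool split 512/1024 across the two nodes, and no_shrink_alloc immediately calls get_test_nr_hugepages with size=2097152 for node 0, which at the 2048 kB hugepage size reported in the meminfo dumps gives the nr_hugepages=1024 seen in the trace. A quick shell check of that arithmetic (not part of the test scripts, just the numbers above):

    # custom_alloc: per-node split vs. the total verified at hugepages.sh@110
    (( 512 + 1024 == 1536 )) && echo 'node0=512 + node1=1024 == HugePages_Total 1536'
    # no_shrink_alloc: 2097152 kB requested / 2048 kB per huge page = 1024 pages on node 0
    echo $(( 2097152 / 2048 ))    # -> 1024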
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.458 22:42:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:00.745 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.745 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.745 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:04:00.745 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.745 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:04:02.123 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:02.123 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.124 22:43:00 
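The test at hugepages.sh@96 compares the string "always [madvise] never" against *\[\n\e\v\e\r\]*. That bracketed form is how the kernel reports its transparent-hugepage mode, so the check effectively means: only bother recording AnonHugePages when THP is not globally disabled. A standalone version of the same idea, assuming the standard kernel knob path (the path is not quoted from the script):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP active ($thp): worth sampling AnonHugePages before the run"
    fi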
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44666104 kB' 'MemAvailable: 48085776 kB' 'Buffers: 11052 kB' 'Cached: 10788608 kB' 'SwapCached: 0 kB' 'Active: 7919044 kB' 'Inactive: 3426680 kB' 'Active(anon): 7527376 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549508 kB' 'Mapped: 156660 kB' 'Shmem: 6981312 kB' 'KReclaimable: 196952 kB' 'Slab: 598952 kB' 'SReclaimable: 196952 kB' 'SUnreclaim: 402000 kB' 'KernelStack: 19008 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8859896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207832 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 
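For reference, the system-wide dump just above already carries everything verify_nr_hugepages is about to confirm: 1024 huge pages allocated, 1024 free, 0 reserved or surplus, 2048 kB apiece, 2097152 kB of Hugetlb in total. The same snapshot can be pulled straight from /proc/meminfo as a cross-check (this is not a step the script itself performs):

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
    # on this runner: Total 1024, Free 1024, Rsvd 0, Surp 0, Hugepagesize 2048 kB, Hugetlb 2097152 kB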
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
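[editor note] The trace above and below is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the requested key (here AnonHugePages, later HugePages_Surp, HugePages_Rsvd and HugePages_Total) and echoing its value. A minimal sketch of that lookup, reconstructed from this set -x output rather than the verbatim script, is:

```bash
#!/usr/bin/env bash
# Sketch only: reconstructed from the trace, not the verbatim setup/common.sh.
# Read /proc/meminfo (or a per-node meminfo file when a node is given),
# strip the optional "Node <n> " prefix, then walk the fields until the
# requested key matches and print its value.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every field until the requested one, mirroring the
        # repeated [[ $var == <key> ]] / continue pattern in the trace.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    echo 0
}

# Example: get_meminfo_sketch HugePages_Surp   -> prints 0 on this node
```

The trace of the same loop continues below for the remaining fields.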
00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.124 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44666616 kB' 'MemAvailable: 48086792 kB' 'Buffers: 11052 kB' 'Cached: 10788612 kB' 'SwapCached: 0 kB' 'Active: 7918904 kB' 'Inactive: 3426680 kB' 'Active(anon): 7527236 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549320 kB' 'Mapped: 156620 kB' 'Shmem: 6981316 kB' 'KReclaimable: 196952 kB' 'Slab: 598844 kB' 'SReclaimable: 196952 kB' 'SUnreclaim: 401892 kB' 'KernelStack: 18864 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8861408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207816 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 
22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44665780 kB' 'MemAvailable: 48085452 kB' 'Buffers: 11052 kB' 'Cached: 10788628 kB' 'SwapCached: 0 kB' 'Active: 7918784 kB' 'Inactive: 3426680 kB' 'Active(anon): 7527116 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549208 kB' 'Mapped: 156620 kB' 'Shmem: 6981332 kB' 'KReclaimable: 196952 kB' 'Slab: 598880 kB' 'SReclaimable: 196952 kB' 'SUnreclaim: 401928 kB' 'KernelStack: 19056 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8861428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207832 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.390 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.391 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (keys from ShmemHugePages through HugePages_Total are likewise compared against HugePages_Rsvd and skipped) 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.392 nr_hugepages=1024 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.392 resv_hugepages=0 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.392 surplus_hugepages=0 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.392 anon_hugepages=0 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44664032 kB' 'MemAvailable: 48083704 kB' 'Buffers: 11052 kB' 'Cached: 10788652 kB' 'SwapCached: 0 kB' 'Active: 7918876 kB' 'Inactive: 3426680 kB' 'Active(anon): 7527208 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549208 kB' 'Mapped: 156620 kB' 'Shmem: 6981356 kB' 'KReclaimable: 196952 kB' 'Slab: 598880 kB' 'SReclaimable: 196952 kB' 'SUnreclaim: 401928 kB' 
'KernelStack: 19024 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8859976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207880 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 22:43:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (keys from Inactive through Bounce are each compared against HugePages_Total and skipped)
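At this point the trace has already extracted HugePages_Rsvd (resv=0) from the first dump and is pulling HugePages_Total out of the second one so that setup/hugepages.sh can confirm the pool it configured is what the kernel now reports. A loose condensation of that consistency check, reusing the hypothetical get_meminfo_value helper sketched earlier; the variable names and the hard-coded expectation of 1024 pages are illustrative, not the hugepages.sh source:

  expected_nr_hugepages=1024   # what this run asked the kernel to allocate
  total=$(get_meminfo_value HugePages_Total)
  resv=$(get_meminfo_value HugePages_Rsvd)
  surp=$(get_meminfo_value HugePages_Surp)
  echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
  # The pool is consistent when the configured count accounts for surplus and reserved pages.
  if (( total == expected_nr_hugepages + surp + resv )); then
      echo "hugepage accounting matches the requested allocation"
  else
      echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
  fi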
00:04:02.393 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (keys from WritebackTmp through Unaccepted are each compared against HugePages_Total and skipped) 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.394 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 25720152 kB' 'MemUsed: 6915500 kB' 'SwapCached: 0 kB' 'Active: 3119560 kB' 'Inactive: 183336 kB' 'Active(anon): 2883764 kB' 'Inactive(anon): 0 kB' 'Active(file): 235796 kB' 'Inactive(file): 183336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2998660 kB' 'Mapped: 44636 kB' 'AnonPages: 307476 kB' 'Shmem: 2579528 kB' 'KernelStack: 10968 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 335528 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 247300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.394 22:43:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (every node0 meminfo key from MemTotal through Unaccepted is compared against HugePages_Surp and skipped)
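The same scan is now being repeated against /sys/devices/system/node/node0/meminfo to pick up the node-local HugePages_Surp, after which the per-node totals are compared with what was requested on each node (the node0=1024 expecting 1024 line just below). A rough per-node sketch of that verification, again using the hypothetical get_meminfo_value helper; the expected counts are illustrative, not the real script's bookkeeping:

  expected_per_node=(1024 0)   # illustrative: pages requested on node0 and node1
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      total=$(get_meminfo_value HugePages_Total "$node")
      surp=$(get_meminfo_value HugePages_Surp "$node")
      # Report the node-local counters next to what the test expected on this node.
      echo "node${node}: HugePages_Total=$total HugePages_Surp=$surp expecting ${expected_per_node[node]}"
  done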
00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.395 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.396 node0=1024 expecting 1024 00:04:02.396 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.396 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:02.396 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:02.396 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:02.396 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.396 22:43:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:05.685 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.685 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.685 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:04:05.685 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:80:04.5 
(8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.685 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:04:07.066 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44653764 kB' 'MemAvailable: 48073420 kB' 'Buffers: 11052 kB' 'Cached: 10788772 kB' 'SwapCached: 0 kB' 'Active: 7917004 kB' 'Inactive: 3426680 kB' 'Active(anon): 7525336 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546264 kB' 'Mapped: 156948 kB' 'Shmem: 6981476 kB' 'KReclaimable: 196920 kB' 'Slab: 598848 kB' 'SReclaimable: 196920 kB' 'SUnreclaim: 401928 kB' 'KernelStack: 18720 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8859816 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 207656 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.066 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.067 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (keys from Active(anon) through WritebackTmp are each compared against AnonHugePages and skipped) 00:04:07.067 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.067 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44653996 kB' 'MemAvailable: 48073652 kB' 'Buffers: 11052 kB' 'Cached: 10788780 kB' 'SwapCached: 0 kB' 'Active: 7915836 kB' 'Inactive: 3426680 kB' 'Active(anon): 7524168 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545976 kB' 'Mapped: 156680 kB' 'Shmem: 6981484 kB' 'KReclaimable: 196920 kB' 'Slab: 598796 kB' 'SReclaimable: 196920 kB' 'SUnreclaim: 401876 kB' 'KernelStack: 18736 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8859832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207640 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.068 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.069 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.070 
22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44654680 kB' 'MemAvailable: 48074336 kB' 'Buffers: 11052 kB' 'Cached: 10788796 kB' 'SwapCached: 0 kB' 'Active: 7915836 kB' 'Inactive: 3426680 kB' 'Active(anon): 7524168 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545972 kB' 'Mapped: 156680 kB' 'Shmem: 6981500 kB' 'KReclaimable: 196920 kB' 'Slab: 598796 kB' 'SReclaimable: 196920 kB' 'SUnreclaim: 401876 kB' 'KernelStack: 18736 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8859856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207640 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.070 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.071 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.072 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.072 nr_hugepages=1024 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.072 resv_hugepages=0 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.072 surplus_hugepages=0 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.072 anon_hugepages=0 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.072 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294964 kB' 'MemFree: 44654788 kB' 'MemAvailable: 48074444 kB' 'Buffers: 11052 kB' 'Cached: 10788816 kB' 'SwapCached: 0 kB' 'Active: 7915888 kB' 'Inactive: 3426680 kB' 'Active(anon): 7524220 kB' 'Inactive(anon): 0 kB' 'Active(file): 391668 kB' 'Inactive(file): 3426680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545972 kB' 'Mapped: 156680 kB' 'Shmem: 6981520 kB' 'KReclaimable: 196920 kB' 'Slab: 598796 kB' 'SReclaimable: 196920 kB' 'SUnreclaim: 401876 kB' 'KernelStack: 18736 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487508 kB' 'Committed_AS: 8859880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207640 kB' 'VmallocChunk: 0 kB' 'Percpu: 59904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 445396 kB' 'DirectMap2M: 10768384 kB' 'DirectMap1G: 57671680 kB' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
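The xtrace above is the harness's get_meminfo helper walking /proc/meminfo one field at a time: every line is split on ': ', each key that is not the requested one is skipped with continue, and the value is echoed as soon as the key matches (HugePages_Rsvd resolved to 0 above, and HugePages_Total resolves to 1024; 1024 pages at the 2048 kB Hugepagesize account for the Hugetlb: 2097152 kB figure in the printed snapshot). A minimal standalone sketch of that lookup technique follows; the function and variable names are illustrative, not the SPDK originals.

    #!/usr/bin/env bash
    # Sketch: resolve one /proc/meminfo field the way the traced loop does --
    # split each line on ': ' and stop at the first matching key.
    meminfo_get() {
        local want=$1 key val _
        while IFS=': ' read -r key val _; do
            if [[ $key == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1   # requested field not present
    }

    meminfo_get HugePages_Rsvd    # printed 0 in the run above
    meminfo_get HugePages_Total   # printed 1024 in the run above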
00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.072 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.073 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.074 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 25729308 kB' 'MemUsed: 6906344 kB' 'SwapCached: 0 kB' 'Active: 3116516 kB' 'Inactive: 183336 kB' 'Active(anon): 2880720 kB' 'Inactive(anon): 0 kB' 'Active(file): 235796 kB' 'Inactive(file): 183336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2998712 kB' 'Mapped: 44672 kB' 'AnonPages: 304304 kB' 'Shmem: 2579580 kB' 'KernelStack: 10696 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88196 kB' 'Slab: 335080 kB' 'SReclaimable: 88196 kB' 'SUnreclaim: 246884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
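When get_meminfo is handed a node number (node=0 for the HugePages_Surp scan running here), the trace shows it swapping mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripping the leading "Node 0 " prefix before doing the same key scan; get_nodes builds its per-node table the same way by globbing /sys/devices/system/node/node[0-9]*, and hugepages.sh then checks that 1024 == nr_hugepages + surp + resv. A hedged per-node variant of the lookup, again with illustrative names rather than the SPDK helpers:

    # Sketch: per-node meminfo lookup. Per-node lines read
    # "Node 0 HugePages_Surp:      0", so the "Node <n> " prefix is
    # dropped before splitting on ': '.
    node_meminfo_get() {
        local node=$1 want=$2 line key val _
        local f=/sys/devices/system/node/node${node}/meminfo
        [[ -e $f ]] || return 1
        while IFS= read -r line; do
            line=${line#"Node $node "}
            IFS=': ' read -r key val _ <<< "$line"
            if [[ $key == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < "$f"
        return 1
    }

    node_meminfo_get 0 HugePages_Surp   # printed 0 in the run above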
00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.075 node0=1024 expecting 1024 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.075 00:04:07.075 real 0m9.593s 00:04:07.075 user 0m3.721s 00:04:07.075 sys 0m6.010s 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.075 22:43:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.075 ************************************ 00:04:07.075 END TEST no_shrink_alloc 00:04:07.075 ************************************ 00:04:07.075 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:07.075 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:07.075 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.075 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.076 22:43:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.076 00:04:07.076 real 0m36.949s 00:04:07.076 user 0m13.520s 00:04:07.076 sys 0m21.825s 00:04:07.076 22:43:05 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.076 22:43:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.076 ************************************ 00:04:07.076 END TEST hugepages 00:04:07.076 ************************************ 00:04:07.334 22:43:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:07.335 22:43:05 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.335 22:43:05 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.335 22:43:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.335 ************************************ 00:04:07.335 START TEST driver 00:04:07.335 ************************************ 00:04:07.335 22:43:05 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:07.335 * Looking for test storage... 
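Just above, before TEST driver starts, the hugepages suite finishes with node0=1024 expecting 1024 and then calls clear_hp, which walks every NUMA node, zeroes every hugepage pool it finds under sysfs, and exports CLEAR_HUGE=yes. A sketch of that teardown under the standard kernel sysfs layout (root is required to write the counters; the loop structure mirrors the trace but is not the SPDK function itself):

    # Sketch: reset every per-node hugepage pool to 0, as clear_hp does.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -e $hp/nr_hugepages ]] || continue
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes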
00:04:07.335 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:07.335 22:43:05 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:07.335 22:43:05 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.335 22:43:05 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.743 22:43:16 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:19.743 22:43:16 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.743 22:43:16 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.743 22:43:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:19.743 ************************************ 00:04:19.743 START TEST guess_driver 00:04:19.743 ************************************ 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 198 > 0 )) 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:19.743 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:19.743 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:19.743 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:19.743 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:19.743 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:19.743 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:19.743 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:19.743 22:43:16 
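guess_driver settles on vfio-pci here because the machine has populated IOMMU groups (the trace counts 198 entries under /sys/kernel/iommu_groups) and modprobe --show-depends vfio_pci resolves to a chain of loadable .ko modules rather than failing. A compact sketch of that decision, assuming only the sysfs and kmod interfaces visible in the trace; the function name is illustrative:

    # Sketch: pick vfio-pci when an IOMMU (or unsafe no-IOMMU mode) is
    # available and the module chain can be resolved, else report failure.
    pick_vfio() {
        local unsafe=N
        local groups=(/sys/kernel/iommu_groups/[0-9]*)
        [[ -e ${groups[0]} ]] || groups=()   # empty dir -> no IOMMU groups
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
           [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }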
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:19.743 Looking for driver=vfio-pci 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.743 22:43:16 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.279 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.538 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.538 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.539 22:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.445 22:43:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.348 22:43:24 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail 
== 0 )) 00:04:26.348 22:43:24 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:26.348 22:43:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.348 22:43:24 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.556 00:04:38.556 real 0m18.531s 00:04:38.556 user 0m3.757s 00:04:38.556 sys 0m6.803s 00:04:38.556 22:43:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.556 22:43:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.556 ************************************ 00:04:38.556 END TEST guess_driver 00:04:38.556 ************************************ 00:04:38.556 00:04:38.556 real 0m30.171s 00:04:38.556 user 0m5.668s 00:04:38.556 sys 0m10.400s 00:04:38.556 22:43:35 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.556 22:43:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.556 ************************************ 00:04:38.556 END TEST driver 00:04:38.556 ************************************ 00:04:38.556 22:43:35 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:38.556 22:43:35 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.556 22:43:35 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.556 22:43:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.556 ************************************ 00:04:38.556 START TEST devices 00:04:38.556 ************************************ 00:04:38.556 22:43:35 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:38.556 * Looking for test storage... 
00:04:38.556 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:38.556 22:43:35 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.556 22:43:35 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.556 22:43:35 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.556 22:43:35 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:42.749 22:43:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@201 -- 
# ctrl=nvme0n1 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:dd:00.0 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\d\:\0\0\.\0* ]] 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:42.749 22:43:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:42.749 22:43:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:42.749 No valid GPT data, bailing 00:04:42.749 22:43:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:42.749 22:43:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:42.749 22:43:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:42.749 22:43:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:42.749 22:43:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:42.749 22:43:40 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:dd:00.0 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:df:00.0 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\f\:\0\0\.\0* ]] 00:04:42.749 22:43:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:42.749 22:43:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:42.749 22:43:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:04:43.009 No valid GPT data, bailing 00:04:43.009 22:43:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:43.009 22:43:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:43.009 22:43:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:43.009 22:43:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:43.009 22:43:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:43.009 22:43:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:43.009 22:43:41 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:df:00.0 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:43.009 22:43:41 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:de:00.0 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\e\:\0\0\.\0* ]] 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:04:43.009 No valid GPT data, bailing 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:43.009 22:43:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:43.009 22:43:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:43.009 22:43:41 setup.sh.devices -- setup/common.sh@80 -- # echo 400088457216 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 400088457216 >= min_disk_size )) 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:de:00.0 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:dc:00.0 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\c\:\0\0\.\0* ]] 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme3n1 00:04:43.009 No valid GPT data, bailing 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:43.009 22:43:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:43.009 22:43:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:43.009 22:43:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:43.009 22:43:41 setup.sh.devices -- setup/common.sh@80 -- # echo 400088457216 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 400088457216 >= min_disk_size )) 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:dc:00.0 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:43.009 22:43:41 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount 
nvme_mount 00:04:43.009 22:43:41 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.009 22:43:41 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.009 22:43:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.009 ************************************ 00:04:43.009 START TEST nvme_mount 00:04:43.009 ************************************ 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:43.009 22:43:41 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:44.387 Creating new GPT entries in memory. 00:04:44.387 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.387 other utilities. 00:04:44.387 22:43:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.387 22:43:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.387 22:43:42 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.387 22:43:42 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.387 22:43:42 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:45.324 Creating new GPT entries in memory. 00:04:45.324 The operation has completed successfully. 
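The partitioning traced above boils down to two sgdisk calls: zap the existing label, then create a single 1 GiB data partition, while sync_dev_uevents.sh waits for the nvme0n1p1 node to appear. Condensed into plain commands (a sketch of what the trace shows, destructive to /dev/nvme0n1; partprobe stands in for the SPDK udev-sync helper):

  disk=/dev/nvme0n1
  sectors=$(( 1073741824 / 512 ))                                        # 1 GiB in 512-byte sectors = 2097152
  sgdisk "$disk" --zap-all                                               # wipe any existing GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + sectors - 1 ))   # sectors 2048..2099199, as in the trace
  partprobe "$disk"                                                      # stand-in for sync_dev_uevents.sh block/partition nvme0n1p1

The wait matters because the very next step, mkfs on /dev/nvme0n1p1 traced below, would fail if the kernel had not yet created the partition device node.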
00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 443315 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:dd:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.324 22:43:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 
22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:48.610 22:43:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.986 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.986 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:49.986 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.986 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.986 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.986 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:49.986 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.243 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.243 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.243 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.243 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.243 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.244 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.502 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:50.502 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.502 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.502 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- 
setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:dd:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.502 22:43:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:53.790 22:43:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # 
mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:dd:00.0 data@nvme0n1 '' '' 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.691 22:43:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:58.225 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.225 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:58.225 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:58.225 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.225 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.225 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:58.484 22:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 
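cleanup_nvme, whose body is traced next, is the same teardown used earlier in this test: unmount the scratch directory if it is still a mountpoint, then clear the filesystem and partition-table signatures so the disk is handed back blank. In outline (a sketch of the commands that follow; $mnt abbreviates the nvme_mount path used throughout this run):

  mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
  mountpoint -q "$mnt" && umount "$mnt"                      # only unmount if something is still mounted there
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1     # drop the ext4 signature on the partition, if the node exists
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1         # then drop signatures on the whole disk

In the trace below only the whole-disk wipefs produces output: the second mkfs was made directly on /dev/nvme0n1, so no partition node exists at this point.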
00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.386 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.386 00:05:00.386 real 0m17.100s 00:05:00.386 user 0m5.386s 00:05:00.386 sys 0m9.420s 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.386 22:43:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:00.386 ************************************ 00:05:00.386 END TEST nvme_mount 00:05:00.386 ************************************ 00:05:00.386 22:43:58 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:00.386 22:43:58 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.386 22:43:58 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.386 22:43:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:00.386 ************************************ 00:05:00.386 START TEST dm_mount 00:05:00.386 ************************************ 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.386 22:43:58 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:01.322 Creating new GPT entries in memory. 00:05:01.322 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.322 other utilities. 00:05:01.322 22:43:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.322 22:43:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.322 22:43:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.322 22:43:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.322 22:43:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:02.257 Creating new GPT entries in memory. 00:05:02.257 The operation has completed successfully. 00:05:02.257 22:44:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.257 22:44:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.257 22:44:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.257 22:44:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.257 22:44:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:03.191 The operation has completed successfully. 00:05:03.191 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.191 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.191 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 448847 00:05:03.449 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:03.449 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.449 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.449 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:03.449 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:03.449 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.449 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:03.449 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.450 22:44:01 
setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:dd:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.450 22:44:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 
22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.732 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.733 
22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:06.733 22:44:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.636 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.636 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:08.636 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:08.636 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:08.636 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:08.636 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:dd:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.637 22:44:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.925 22:44:09 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:11.925 22:44:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.311 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:13.570 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.570 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.570 22:44:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:13.570 00:05:13.570 real 0m13.203s 00:05:13.570 user 0m3.723s 00:05:13.570 sys 0m6.211s 00:05:13.570 22:44:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.570 22:44:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:13.570 ************************************ 00:05:13.570 END TEST dm_mount 00:05:13.570 ************************************ 00:05:13.570 22:44:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:13.570 22:44:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:13.570 22:44:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.570 22:44:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.570 22:44:11 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:13.570 22:44:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.570 22:44:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.830 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:13.830 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:13.830 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:13.830 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:13.830 22:44:11 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:13.830 22:44:11 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:13.830 22:44:11 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.830 22:44:11 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.830 22:44:11 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.830 22:44:11 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.830 22:44:11 setup.sh.devices -- setup/devices.sh@15 -- 
# wipefs --all /dev/nvme0n1 00:05:13.830 00:05:13.830 real 0m36.320s 00:05:13.830 user 0m11.320s 00:05:13.830 sys 0m19.280s 00:05:13.830 22:44:11 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.830 22:44:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:13.830 ************************************ 00:05:13.830 END TEST devices 00:05:13.830 ************************************ 00:05:13.830 00:05:13.830 real 2m24.371s 00:05:13.830 user 0m41.633s 00:05:13.830 sys 1m11.219s 00:05:13.830 22:44:11 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.830 22:44:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.830 ************************************ 00:05:13.830 END TEST setup.sh 00:05:13.830 ************************************ 00:05:13.830 22:44:11 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:18.023 Hugepages 00:05:18.023 node hugesize free / total 00:05:18.023 node0 1048576kB 0 / 0 00:05:18.023 node0 2048kB 2048 / 2048 00:05:18.023 node1 1048576kB 0 / 0 00:05:18.023 node1 2048kB 0 / 0 00:05:18.023 00:05:18.023 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.023 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:18.023 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:18.023 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:18.023 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:18.023 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:18.023 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:18.023 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:18.023 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:18.023 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:18.023 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:18.023 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:18.023 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:18.023 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:18.023 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:18.023 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:18.023 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:18.023 NVMe 0000:dc:00.0 8086 0953 1 nvme nvme3 nvme3n1 00:05:18.023 NVMe 0000:dd:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:18.023 NVMe 0000:de:00.0 8086 0953 1 nvme nvme2 nvme2n1 00:05:18.023 NVMe 0000:df:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:05:18.023 22:44:15 -- spdk/autotest.sh@130 -- # uname -s 00:05:18.023 22:44:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:18.023 22:44:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:18.023 22:44:15 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:21.314 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 
0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:21.314 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:23.221 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.221 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.221 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:05:23.481 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:05:25.386 22:44:23 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:25.964 22:44:24 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:25.964 22:44:24 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:25.964 22:44:24 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.964 22:44:24 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:25.964 22:44:24 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:25.964 22:44:24 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:25.964 22:44:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.964 22:44:24 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:25.964 22:44:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:26.223 22:44:24 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:26.223 22:44:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:05:26.223 22:44:24 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.514 Waiting for block devices as requested 00:05:29.514 0000:dd:00.0 (8086 0a54): vfio-pci -> nvme 00:05:29.514 0000:df:00.0 (8086 0a54): vfio-pci -> nvme 00:05:29.773 0000:de:00.0 (8086 0953): vfio-pci -> nvme 00:05:32.308 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:32.308 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:32.308 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:32.567 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:32.567 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:32.567 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:32.826 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:32.826 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:32.826 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:33.085 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:33.085 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:33.085 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:33.085 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:33.344 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:33.344 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:33.344 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:33.603 0000:dc:00.0 (8086 0953): vfio-pci -> nvme 00:05:37.792 22:44:35 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:37.792 22:44:35 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:dc:00.0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # grep 0000:dc:00.0/nvme/nvme 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 ]] 00:05:37.792 22:44:35 -- 
common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # oacs=' 0x6' 00:05:37.792 22:44:35 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:37.792 22:44:35 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:dd:00.0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # grep 0000:dd:00.0/nvme/nvme 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:37.792 22:44:35 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:37.792 22:44:35 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:37.792 22:44:35 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:37.792 22:44:35 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1557 -- # continue 00:05:37.792 22:44:35 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:37.792 22:44:35 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:de:00.0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # grep 0000:de:00.0/nvme/nvme 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1503 -- # [[ 
-z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:37.792 22:44:35 -- common/autotest_common.sh@1545 -- # oacs=' 0x6' 00:05:37.792 22:44:35 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:05:37.792 22:44:35 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:37.792 22:44:35 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:df:00.0 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:37.792 22:44:35 -- common/autotest_common.sh@1502 -- # grep 0000:df:00.0/nvme/nvme 00:05:37.793 22:44:35 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 00:05:37.793 22:44:35 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 ]] 00:05:37.793 22:44:35 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 00:05:37.793 22:44:35 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:37.793 22:44:35 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:37.793 22:44:35 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:37.793 22:44:35 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:37.793 22:44:35 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:37.793 22:44:35 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:37.793 22:44:35 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:37.793 22:44:35 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:37.793 22:44:35 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:37.793 22:44:35 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:37.793 22:44:35 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:37.793 22:44:35 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:37.793 22:44:35 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:37.793 22:44:35 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:37.793 22:44:35 -- common/autotest_common.sh@1557 -- # continue 00:05:37.793 22:44:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:37.793 22:44:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.793 22:44:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.793 22:44:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:37.793 22:44:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.793 22:44:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.793 22:44:35 -- spdk/autotest.sh@139 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:41.084 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:41.084 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:41.343 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:43.249 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:05:43.249 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:05:43.249 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:05:43.508 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:05:44.881 22:44:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:44.881 22:44:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.881 22:44:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.140 22:44:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:45.140 22:44:43 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:45.140 22:44:43 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:45.140 22:44:43 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:45.140 22:44:43 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:45.140 22:44:43 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:45.140 22:44:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:45.140 22:44:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:45.140 22:44:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.140 22:44:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:45.140 22:44:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:45.140 22:44:43 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:45.140 22:44:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:05:45.140 22:44:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:45.140 22:44:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:dc:00.0/device 00:05:45.140 22:44:43 -- common/autotest_common.sh@1580 -- # device=0x0953 00:05:45.140 22:44:43 -- common/autotest_common.sh@1581 -- # [[ 0x0953 == \0\x\0\a\5\4 ]] 00:05:45.140 22:44:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:45.140 22:44:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:dd:00.0/device 00:05:45.140 22:44:43 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:45.140 22:44:43 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:45.140 22:44:43 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:45.140 22:44:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:45.140 22:44:43 -- common/autotest_common.sh@1580 -- # cat 
/sys/bus/pci/devices/0000:de:00.0/device 00:05:45.140 22:44:43 -- common/autotest_common.sh@1580 -- # device=0x0953 00:05:45.140 22:44:43 -- common/autotest_common.sh@1581 -- # [[ 0x0953 == \0\x\0\a\5\4 ]] 00:05:45.141 22:44:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:45.141 22:44:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:df:00.0/device 00:05:45.141 22:44:43 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:45.141 22:44:43 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:45.141 22:44:43 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:45.141 22:44:43 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:dd:00.0 0000:df:00.0 00:05:45.141 22:44:43 -- common/autotest_common.sh@1592 -- # [[ -z 0000:dd:00.0 ]] 00:05:45.141 22:44:43 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=461586 00:05:45.141 22:44:43 -- common/autotest_common.sh@1598 -- # waitforlisten 461586 00:05:45.141 22:44:43 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.141 22:44:43 -- common/autotest_common.sh@831 -- # '[' -z 461586 ']' 00:05:45.141 22:44:43 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.141 22:44:43 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.141 22:44:43 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.141 22:44:43 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.141 22:44:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.141 [2024-07-24 22:44:43.269708] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:05:45.141 [2024-07-24 22:44:43.269786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461586 ] 00:05:45.141 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.141 [2024-07-24 22:44:43.343873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.400 [2024-07-24 22:44:43.420978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.967 22:44:44 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.967 22:44:44 -- common/autotest_common.sh@864 -- # return 0 00:05:45.967 22:44:44 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:45.967 22:44:44 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:45.967 22:44:44 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:dd:00.0 00:05:49.258 nvme0n1 00:05:49.258 22:44:47 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:49.258 [2024-07-24 22:44:47.270792] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:49.258 request: 00:05:49.258 { 00:05:49.258 "nvme_ctrlr_name": "nvme0", 00:05:49.258 "password": "test", 00:05:49.258 "method": "bdev_nvme_opal_revert", 00:05:49.258 "req_id": 1 00:05:49.258 } 00:05:49.258 Got JSON-RPC error response 00:05:49.258 response: 00:05:49.258 { 00:05:49.258 "code": -32602, 00:05:49.258 "message": "Invalid parameters" 00:05:49.258 } 00:05:49.258 22:44:47 -- common/autotest_common.sh@1604 -- # true 00:05:49.258 22:44:47 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:49.258 22:44:47 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:49.258 22:44:47 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:df:00.0 00:05:52.594 nvme1n1 00:05:52.594 22:44:50 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:05:52.594 [2024-07-24 22:44:50.461760] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:05:52.594 request: 00:05:52.594 { 00:05:52.594 "nvme_ctrlr_name": "nvme1", 00:05:52.594 "password": "test", 00:05:52.594 "method": "bdev_nvme_opal_revert", 00:05:52.594 "req_id": 1 00:05:52.594 } 00:05:52.594 Got JSON-RPC error response 00:05:52.594 response: 00:05:52.594 { 00:05:52.594 "code": -32602, 00:05:52.594 "message": "Invalid parameters" 00:05:52.594 } 00:05:52.594 22:44:50 -- common/autotest_common.sh@1604 -- # true 00:05:52.594 22:44:50 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:52.594 22:44:50 -- common/autotest_common.sh@1608 -- # killprocess 461586 00:05:52.594 22:44:50 -- common/autotest_common.sh@950 -- # '[' -z 461586 ']' 00:05:52.594 22:44:50 -- common/autotest_common.sh@954 -- # kill -0 461586 00:05:52.594 22:44:50 -- common/autotest_common.sh@955 -- # uname 00:05:52.594 22:44:50 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.594 22:44:50 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 461586 00:05:52.594 22:44:50 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.594 22:44:50 -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.594 22:44:50 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 461586' 00:05:52.594 killing process with pid 461586 00:05:52.594 22:44:50 -- common/autotest_common.sh@969 -- # kill 461586 00:05:52.594 22:44:50 -- common/autotest_common.sh@974 -- # wait 461586 00:05:55.882 22:44:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:55.882 22:44:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:55.882 22:44:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:55.882 22:44:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:55.882 22:44:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:55.882 22:44:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:55.882 22:44:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.882 22:44:53 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:55.882 22:44:53 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:55.882 22:44:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.882 22:44:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.882 22:44:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.882 ************************************ 00:05:55.882 START TEST env 00:05:55.883 ************************************ 00:05:55.883 22:44:53 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:55.883 * Looking for test storage... 00:05:55.883 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:55.883 22:44:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:55.883 22:44:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.883 22:44:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.883 22:44:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.883 ************************************ 00:05:55.883 START TEST env_memory 00:05:55.883 ************************************ 00:05:55.883 22:44:53 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:55.883 00:05:55.883 00:05:55.883 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.883 http://cunit.sourceforge.net/ 00:05:55.883 00:05:55.883 00:05:55.883 Suite: memory 00:05:55.883 Test: alloc and free memory map ...[2024-07-24 22:44:53.656839] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:55.883 passed 00:05:55.883 Test: mem map translation ...[2024-07-24 22:44:53.670482] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:55.883 [2024-07-24 22:44:53.670496] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:55.883 [2024-07-24 22:44:53.670528] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:55.883 [2024-07-24 22:44:53.670535] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:55.883 passed 00:05:55.883 Test: mem map registration ...[2024-07-24 22:44:53.693155] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:55.883 [2024-07-24 22:44:53.693169] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:55.883 passed 00:05:55.883 Test: mem map adjacent registrations ...passed 00:05:55.883 00:05:55.883 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.883 suites 1 1 n/a 0 0 00:05:55.883 tests 4 4 4 0 0 00:05:55.883 asserts 152 152 152 0 n/a 00:05:55.883 00:05:55.883 Elapsed time = 0.091 seconds 00:05:55.883 00:05:55.883 real 0m0.102s 00:05:55.883 user 0m0.089s 00:05:55.883 sys 0m0.012s 00:05:55.883 22:44:53 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.883 22:44:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:55.883 ************************************ 00:05:55.883 END TEST env_memory 00:05:55.883 ************************************ 00:05:55.883 22:44:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:55.883 22:44:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.883 22:44:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.883 22:44:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.883 ************************************ 00:05:55.883 START TEST env_vtophys 00:05:55.883 ************************************ 00:05:55.883 22:44:53 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:55.883 EAL: lib.eal log level changed from notice to debug 00:05:55.883 EAL: Detected lcore 0 as core 0 on socket 0 00:05:55.883 EAL: Detected lcore 1 as core 1 on socket 0 00:05:55.883 EAL: Detected lcore 2 as core 2 on socket 0 00:05:55.883 EAL: Detected lcore 3 as core 3 on socket 0 00:05:55.883 EAL: Detected lcore 4 as core 4 on socket 0 00:05:55.883 EAL: Detected lcore 5 as core 5 on socket 0 00:05:55.883 EAL: Detected lcore 6 as core 8 on socket 0 00:05:55.883 EAL: Detected lcore 7 as core 9 on socket 0 00:05:55.883 EAL: Detected lcore 8 as core 10 on socket 0 00:05:55.883 EAL: Detected lcore 9 as core 11 on socket 0 00:05:55.883 EAL: Detected lcore 10 as core 12 on socket 0 00:05:55.883 EAL: Detected lcore 11 as core 16 on socket 0 00:05:55.883 EAL: Detected lcore 12 as core 17 on socket 0 00:05:55.883 EAL: Detected lcore 13 as core 18 on socket 0 00:05:55.883 EAL: Detected lcore 14 as core 19 on socket 0 00:05:55.883 EAL: Detected lcore 15 as core 20 on socket 0 00:05:55.883 EAL: Detected lcore 16 as core 21 on socket 0 00:05:55.883 EAL: Detected lcore 17 as core 24 on socket 0 00:05:55.883 EAL: Detected lcore 18 as core 25 on socket 0 00:05:55.883 EAL: Detected lcore 19 as core 26 on socket 0 00:05:55.883 EAL: Detected lcore 20 as core 27 on socket 0 00:05:55.883 EAL: Detected lcore 21 as core 28 on socket 0 00:05:55.883 EAL: Detected lcore 22 as core 0 on socket 1 00:05:55.883 EAL: Detected lcore 23 as core 1 on socket 1 00:05:55.883 EAL: Detected lcore 24 as core 2 on socket 1 00:05:55.883 EAL: Detected lcore 25 as core 3 on socket 1 00:05:55.883 EAL: Detected lcore 26 as core 4 on 
socket 1 00:05:55.883 EAL: Detected lcore 27 as core 5 on socket 1 00:05:55.883 EAL: Detected lcore 28 as core 8 on socket 1 00:05:55.883 EAL: Detected lcore 29 as core 9 on socket 1 00:05:55.883 EAL: Detected lcore 30 as core 10 on socket 1 00:05:55.883 EAL: Detected lcore 31 as core 11 on socket 1 00:05:55.883 EAL: Detected lcore 32 as core 12 on socket 1 00:05:55.883 EAL: Detected lcore 33 as core 16 on socket 1 00:05:55.883 EAL: Detected lcore 34 as core 17 on socket 1 00:05:55.883 EAL: Detected lcore 35 as core 18 on socket 1 00:05:55.883 EAL: Detected lcore 36 as core 19 on socket 1 00:05:55.883 EAL: Detected lcore 37 as core 20 on socket 1 00:05:55.883 EAL: Detected lcore 38 as core 21 on socket 1 00:05:55.883 EAL: Detected lcore 39 as core 24 on socket 1 00:05:55.883 EAL: Detected lcore 40 as core 25 on socket 1 00:05:55.883 EAL: Detected lcore 41 as core 26 on socket 1 00:05:55.883 EAL: Detected lcore 42 as core 27 on socket 1 00:05:55.883 EAL: Detected lcore 43 as core 28 on socket 1 00:05:55.883 EAL: Detected lcore 44 as core 0 on socket 0 00:05:55.883 EAL: Detected lcore 45 as core 1 on socket 0 00:05:55.883 EAL: Detected lcore 46 as core 2 on socket 0 00:05:55.883 EAL: Detected lcore 47 as core 3 on socket 0 00:05:55.883 EAL: Detected lcore 48 as core 4 on socket 0 00:05:55.883 EAL: Detected lcore 49 as core 5 on socket 0 00:05:55.883 EAL: Detected lcore 50 as core 8 on socket 0 00:05:55.883 EAL: Detected lcore 51 as core 9 on socket 0 00:05:55.883 EAL: Detected lcore 52 as core 10 on socket 0 00:05:55.883 EAL: Detected lcore 53 as core 11 on socket 0 00:05:55.883 EAL: Detected lcore 54 as core 12 on socket 0 00:05:55.883 EAL: Detected lcore 55 as core 16 on socket 0 00:05:55.883 EAL: Detected lcore 56 as core 17 on socket 0 00:05:55.883 EAL: Detected lcore 57 as core 18 on socket 0 00:05:55.883 EAL: Detected lcore 58 as core 19 on socket 0 00:05:55.883 EAL: Detected lcore 59 as core 20 on socket 0 00:05:55.883 EAL: Detected lcore 60 as core 21 on socket 0 00:05:55.883 EAL: Detected lcore 61 as core 24 on socket 0 00:05:55.883 EAL: Detected lcore 62 as core 25 on socket 0 00:05:55.883 EAL: Detected lcore 63 as core 26 on socket 0 00:05:55.883 EAL: Detected lcore 64 as core 27 on socket 0 00:05:55.883 EAL: Detected lcore 65 as core 28 on socket 0 00:05:55.883 EAL: Detected lcore 66 as core 0 on socket 1 00:05:55.883 EAL: Detected lcore 67 as core 1 on socket 1 00:05:55.883 EAL: Detected lcore 68 as core 2 on socket 1 00:05:55.883 EAL: Detected lcore 69 as core 3 on socket 1 00:05:55.883 EAL: Detected lcore 70 as core 4 on socket 1 00:05:55.883 EAL: Detected lcore 71 as core 5 on socket 1 00:05:55.883 EAL: Detected lcore 72 as core 8 on socket 1 00:05:55.883 EAL: Detected lcore 73 as core 9 on socket 1 00:05:55.883 EAL: Detected lcore 74 as core 10 on socket 1 00:05:55.883 EAL: Detected lcore 75 as core 11 on socket 1 00:05:55.883 EAL: Detected lcore 76 as core 12 on socket 1 00:05:55.883 EAL: Detected lcore 77 as core 16 on socket 1 00:05:55.883 EAL: Detected lcore 78 as core 17 on socket 1 00:05:55.883 EAL: Detected lcore 79 as core 18 on socket 1 00:05:55.883 EAL: Detected lcore 80 as core 19 on socket 1 00:05:55.883 EAL: Detected lcore 81 as core 20 on socket 1 00:05:55.883 EAL: Detected lcore 82 as core 21 on socket 1 00:05:55.883 EAL: Detected lcore 83 as core 24 on socket 1 00:05:55.883 EAL: Detected lcore 84 as core 25 on socket 1 00:05:55.883 EAL: Detected lcore 85 as core 26 on socket 1 00:05:55.883 EAL: Detected lcore 86 as core 27 on socket 1 00:05:55.883 EAL: 
Detected lcore 87 as core 28 on socket 1 00:05:55.883 EAL: Maximum logical cores by configuration: 128 00:05:55.883 EAL: Detected CPU lcores: 88 00:05:55.883 EAL: Detected NUMA nodes: 2 00:05:55.883 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:55.883 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:55.883 EAL: Checking presence of .so 'librte_eal.so' 00:05:55.883 EAL: Detected static linkage of DPDK 00:05:55.883 EAL: No shared files mode enabled, IPC will be disabled 00:05:55.883 EAL: Bus pci wants IOVA as 'DC' 00:05:55.883 EAL: Buses did not request a specific IOVA mode. 00:05:55.883 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:55.883 EAL: Selected IOVA mode 'VA' 00:05:55.883 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.883 EAL: Probing VFIO support... 00:05:55.883 EAL: IOMMU type 1 (Type 1) is supported 00:05:55.883 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:55.883 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:55.883 EAL: VFIO support initialized 00:05:55.883 EAL: Ask a virtual area of 0x2e000 bytes 00:05:55.883 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:55.883 EAL: Setting up physically contiguous memory... 00:05:55.883 EAL: Setting maximum number of open files to 524288 00:05:55.883 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:55.883 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:55.883 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:55.883 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.883 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:55.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:55.884 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.884 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:55.884 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:55.884 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.884 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:55.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:55.884 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.884 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:55.884 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:55.884 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.884 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:55.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:55.884 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.884 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:55.884 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:55.884 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.884 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:55.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:55.884 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.884 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:55.884 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:55.884 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:55.884 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.884 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:55.884 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:55.884 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.884 EAL: Virtual area found at 0x201000a00000 (size = 
0x400000000) 00:05:55.884 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:55.884 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.884 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:55.884 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:55.884 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.884 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:55.884 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:55.884 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.884 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:55.884 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:55.884 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.884 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:55.884 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:55.884 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.884 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:55.884 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:55.884 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.884 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:55.884 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:55.884 EAL: Hugepages will be freed exactly as allocated. 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: TSC frequency is ~2100000 KHz 00:05:55.884 EAL: Main lcore 0 is ready (tid=7fd33ccefa00;cpuset=[0]) 00:05:55.884 EAL: Trying to obtain current memory policy. 00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 0 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 2MB 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Mem event callback 'spdk:(nil)' registered 00:05:55.884 00:05:55.884 00:05:55.884 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.884 http://cunit.sourceforge.net/ 00:05:55.884 00:05:55.884 00:05:55.884 Suite: components_suite 00:05:55.884 Test: vtophys_malloc_test ...passed 00:05:55.884 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 4 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 4MB 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was shrunk by 4MB 00:05:55.884 EAL: Trying to obtain current memory policy. 
00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 4 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 6MB 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was shrunk by 6MB 00:05:55.884 EAL: Trying to obtain current memory policy. 00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 4 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 10MB 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was shrunk by 10MB 00:05:55.884 EAL: Trying to obtain current memory policy. 00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 4 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 18MB 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was shrunk by 18MB 00:05:55.884 EAL: Trying to obtain current memory policy. 00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 4 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 34MB 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was shrunk by 34MB 00:05:55.884 EAL: Trying to obtain current memory policy. 00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 4 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 66MB 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was shrunk by 66MB 00:05:55.884 EAL: Trying to obtain current memory policy. 
00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 4 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 130MB 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was shrunk by 130MB 00:05:55.884 EAL: Trying to obtain current memory policy. 00:05:55.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.884 EAL: Restoring previous memory policy: 4 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.884 EAL: request: mp_malloc_sync 00:05:55.884 EAL: No shared files mode enabled, IPC is disabled 00:05:55.884 EAL: Heap on socket 0 was expanded by 258MB 00:05:55.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.144 EAL: request: mp_malloc_sync 00:05:56.144 EAL: No shared files mode enabled, IPC is disabled 00:05:56.144 EAL: Heap on socket 0 was shrunk by 258MB 00:05:56.144 EAL: Trying to obtain current memory policy. 00:05:56.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.144 EAL: Restoring previous memory policy: 4 00:05:56.144 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.144 EAL: request: mp_malloc_sync 00:05:56.144 EAL: No shared files mode enabled, IPC is disabled 00:05:56.144 EAL: Heap on socket 0 was expanded by 514MB 00:05:56.144 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.403 EAL: request: mp_malloc_sync 00:05:56.403 EAL: No shared files mode enabled, IPC is disabled 00:05:56.403 EAL: Heap on socket 0 was shrunk by 514MB 00:05:56.403 EAL: Trying to obtain current memory policy. 
00:05:56.403 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.403 EAL: Restoring previous memory policy: 4 00:05:56.403 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.403 EAL: request: mp_malloc_sync 00:05:56.403 EAL: No shared files mode enabled, IPC is disabled 00:05:56.403 EAL: Heap on socket 0 was expanded by 1026MB 00:05:56.662 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.922 EAL: request: mp_malloc_sync 00:05:56.922 EAL: No shared files mode enabled, IPC is disabled 00:05:56.922 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:56.922 passed 00:05:56.922 00:05:56.922 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.922 suites 1 1 n/a 0 0 00:05:56.922 tests 2 2 2 0 0 00:05:56.922 asserts 497 497 497 0 n/a 00:05:56.922 00:05:56.922 Elapsed time = 0.970 seconds 00:05:56.922 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.922 EAL: request: mp_malloc_sync 00:05:56.922 EAL: No shared files mode enabled, IPC is disabled 00:05:56.922 EAL: Heap on socket 0 was shrunk by 2MB 00:05:56.922 EAL: No shared files mode enabled, IPC is disabled 00:05:56.922 EAL: No shared files mode enabled, IPC is disabled 00:05:56.922 EAL: No shared files mode enabled, IPC is disabled 00:05:56.922 00:05:56.922 real 0m1.095s 00:05:56.922 user 0m0.640s 00:05:56.922 sys 0m0.425s 00:05:56.922 22:44:54 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.922 22:44:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:56.922 ************************************ 00:05:56.922 END TEST env_vtophys 00:05:56.922 ************************************ 00:05:56.922 22:44:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:56.922 22:44:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.922 22:44:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.922 22:44:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.922 ************************************ 00:05:56.922 START TEST env_pci 00:05:56.922 ************************************ 00:05:56.922 22:44:54 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:56.922 00:05:56.922 00:05:56.922 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.922 http://cunit.sourceforge.net/ 00:05:56.922 00:05:56.922 00:05:56.922 Suite: pci 00:05:56.922 Test: pci_hook ...[2024-07-24 22:44:54.965930] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 463484 has claimed it 00:05:56.922 EAL: Cannot find device (10000:00:01.0) 00:05:56.922 EAL: Failed to attach device on primary process 00:05:56.922 passed 00:05:56.922 00:05:56.922 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.922 suites 1 1 n/a 0 0 00:05:56.922 tests 1 1 1 0 0 00:05:56.922 asserts 25 25 25 0 n/a 00:05:56.922 00:05:56.922 Elapsed time = 0.031 seconds 00:05:56.922 00:05:56.922 real 0m0.047s 00:05:56.922 user 0m0.012s 00:05:56.922 sys 0m0.035s 00:05:56.922 22:44:54 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.922 22:44:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:56.922 ************************************ 00:05:56.922 END TEST env_pci 00:05:56.922 ************************************ 00:05:56.922 22:44:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:56.922 
22:44:55 env -- env/env.sh@15 -- # uname 00:05:56.922 22:44:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:56.922 22:44:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:56.922 22:44:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.922 22:44:55 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:56.922 22:44:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.922 22:44:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.922 ************************************ 00:05:56.923 START TEST env_dpdk_post_init 00:05:56.923 ************************************ 00:05:56.923 22:44:55 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.923 EAL: Detected CPU lcores: 88 00:05:56.923 EAL: Detected NUMA nodes: 2 00:05:56.923 EAL: Detected static linkage of DPDK 00:05:56.923 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.923 EAL: Selected IOVA mode 'VA' 00:05:56.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.923 EAL: VFIO support initialized 00:05:56.923 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:57.182 EAL: Using IOMMU type 1 (Type 1) 00:05:58.266 EAL: Probe PCI driver: spdk_nvme (8086:0953) device: 0000:dc:00.0 (socket 1) 00:05:59.241 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:dd:00.0 (socket 1) 00:06:00.136 EAL: Probe PCI driver: spdk_nvme (8086:0953) device: 0000:de:00.0 (socket 1) 00:06:01.134 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:df:00.0 (socket 1) 00:06:05.348 EAL: Releasing PCI mapped resource for 0000:de:00.0 00:06:05.348 EAL: Calling pci_unmap_resource for 0000:de:00.0 at 0x202001008000 00:06:05.348 EAL: Releasing PCI mapped resource for 0000:df:00.0 00:06:05.348 EAL: Calling pci_unmap_resource for 0000:df:00.0 at 0x20200100c000 00:06:05.607 EAL: Releasing PCI mapped resource for 0000:dd:00.0 00:06:05.607 EAL: Calling pci_unmap_resource for 0000:dd:00.0 at 0x202001004000 00:06:05.865 EAL: Releasing PCI mapped resource for 0000:dc:00.0 00:06:05.865 EAL: Calling pci_unmap_resource for 0000:dc:00.0 at 0x202001000000 00:06:06.124 Starting DPDK initialization... 00:06:06.124 Starting SPDK post initialization... 00:06:06.124 SPDK NVMe probe 00:06:06.124 Attaching to 0000:dc:00.0 00:06:06.124 Attaching to 0000:dd:00.0 00:06:06.124 Attaching to 0000:de:00.0 00:06:06.124 Attaching to 0000:df:00.0 00:06:06.124 Attached to 0000:dc:00.0 00:06:06.124 Attached to 0000:de:00.0 00:06:06.124 Attached to 0000:dd:00.0 00:06:06.124 Attached to 0000:df:00.0 00:06:06.124 Cleaning up... 
00:06:06.124 00:06:06.124 real 0m9.244s 00:06:06.124 user 0m6.035s 00:06:06.124 sys 0m0.248s 00:06:06.124 22:45:04 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.124 22:45:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.124 ************************************ 00:06:06.124 END TEST env_dpdk_post_init 00:06:06.124 ************************************ 00:06:06.383 22:45:04 env -- env/env.sh@26 -- # uname 00:06:06.383 22:45:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:06.383 22:45:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:06.383 22:45:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.383 22:45:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.383 22:45:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.383 ************************************ 00:06:06.383 START TEST env_mem_callbacks 00:06:06.383 ************************************ 00:06:06.383 22:45:04 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:06.383 EAL: Detected CPU lcores: 88 00:06:06.383 EAL: Detected NUMA nodes: 2 00:06:06.383 EAL: Detected static linkage of DPDK 00:06:06.383 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:06.383 EAL: Selected IOVA mode 'VA' 00:06:06.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.383 EAL: VFIO support initialized 00:06:06.383 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:06.383 00:06:06.383 00:06:06.383 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.383 http://cunit.sourceforge.net/ 00:06:06.383 00:06:06.383 00:06:06.383 Suite: memory 00:06:06.383 Test: test ... 
00:06:06.383 register 0x200000200000 2097152 00:06:06.383 malloc 3145728 00:06:06.383 register 0x200000400000 4194304 00:06:06.383 buf 0x200000500000 len 3145728 PASSED 00:06:06.383 malloc 64 00:06:06.383 buf 0x2000004fff40 len 64 PASSED 00:06:06.383 malloc 4194304 00:06:06.383 register 0x200000800000 6291456 00:06:06.383 buf 0x200000a00000 len 4194304 PASSED 00:06:06.383 free 0x200000500000 3145728 00:06:06.383 free 0x2000004fff40 64 00:06:06.383 unregister 0x200000400000 4194304 PASSED 00:06:06.383 free 0x200000a00000 4194304 00:06:06.383 unregister 0x200000800000 6291456 PASSED 00:06:06.383 malloc 8388608 00:06:06.383 register 0x200000400000 10485760 00:06:06.383 buf 0x200000600000 len 8388608 PASSED 00:06:06.383 free 0x200000600000 8388608 00:06:06.383 unregister 0x200000400000 10485760 PASSED 00:06:06.383 passed 00:06:06.383 00:06:06.383 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.383 suites 1 1 n/a 0 0 00:06:06.383 tests 1 1 1 0 0 00:06:06.383 asserts 15 15 15 0 n/a 00:06:06.383 00:06:06.383 Elapsed time = 0.008 seconds 00:06:06.383 00:06:06.383 real 0m0.057s 00:06:06.383 user 0m0.021s 00:06:06.383 sys 0m0.036s 00:06:06.383 22:45:04 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.383 22:45:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:06.383 ************************************ 00:06:06.383 END TEST env_mem_callbacks 00:06:06.383 ************************************ 00:06:06.383 00:06:06.383 real 0m10.977s 00:06:06.383 user 0m6.976s 00:06:06.383 sys 0m1.040s 00:06:06.383 22:45:04 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.383 22:45:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.383 ************************************ 00:06:06.383 END TEST env 00:06:06.383 ************************************ 00:06:06.383 22:45:04 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:06.383 22:45:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.383 22:45:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.383 22:45:04 -- common/autotest_common.sh@10 -- # set +x 00:06:06.383 ************************************ 00:06:06.383 START TEST rpc 00:06:06.383 ************************************ 00:06:06.384 22:45:04 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:06.643 * Looking for test storage... 00:06:06.643 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:06.643 22:45:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=465219 00:06:06.643 22:45:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:06.643 22:45:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.643 22:45:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 465219 00:06:06.643 22:45:04 rpc -- common/autotest_common.sh@831 -- # '[' -z 465219 ']' 00:06:06.643 22:45:04 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.643 22:45:04 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.643 22:45:04 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.643 22:45:04 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.643 22:45:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.643 [2024-07-24 22:45:04.664421] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:06.643 [2024-07-24 22:45:04.664507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465219 ] 00:06:06.643 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.643 [2024-07-24 22:45:04.732925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.643 [2024-07-24 22:45:04.815484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:06.643 [2024-07-24 22:45:04.815518] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 465219' to capture a snapshot of events at runtime. 00:06:06.643 [2024-07-24 22:45:04.815524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:06.643 [2024-07-24 22:45:04.815530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:06.643 [2024-07-24 22:45:04.815535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid465219 for offline analysis/debug. 00:06:06.643 [2024-07-24 22:45:04.815549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.582 22:45:05 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.582 22:45:05 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.582 22:45:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:07.582 22:45:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:07.582 22:45:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:07.582 22:45:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:07.582 22:45:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.582 22:45:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.582 22:45:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 ************************************ 00:06:07.582 START TEST rpc_integrity 00:06:07.582 ************************************ 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 
00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:07.582 { 00:06:07.582 "name": "Malloc0", 00:06:07.582 "aliases": [ 00:06:07.582 "a5c3d460-74d3-4775-9c9c-7a9010f04ee0" 00:06:07.582 ], 00:06:07.582 "product_name": "Malloc disk", 00:06:07.582 "block_size": 512, 00:06:07.582 "num_blocks": 16384, 00:06:07.582 "uuid": "a5c3d460-74d3-4775-9c9c-7a9010f04ee0", 00:06:07.582 "assigned_rate_limits": { 00:06:07.582 "rw_ios_per_sec": 0, 00:06:07.582 "rw_mbytes_per_sec": 0, 00:06:07.582 "r_mbytes_per_sec": 0, 00:06:07.582 "w_mbytes_per_sec": 0 00:06:07.582 }, 00:06:07.582 "claimed": false, 00:06:07.582 "zoned": false, 00:06:07.582 "supported_io_types": { 00:06:07.582 "read": true, 00:06:07.582 "write": true, 00:06:07.582 "unmap": true, 00:06:07.582 "flush": true, 00:06:07.582 "reset": true, 00:06:07.582 "nvme_admin": false, 00:06:07.582 "nvme_io": false, 00:06:07.582 "nvme_io_md": false, 00:06:07.582 "write_zeroes": true, 00:06:07.582 "zcopy": true, 00:06:07.582 "get_zone_info": false, 00:06:07.582 "zone_management": false, 00:06:07.582 "zone_append": false, 00:06:07.582 "compare": false, 00:06:07.582 "compare_and_write": false, 00:06:07.582 "abort": true, 00:06:07.582 "seek_hole": false, 00:06:07.582 "seek_data": false, 00:06:07.582 "copy": true, 00:06:07.582 "nvme_iov_md": false 00:06:07.582 }, 00:06:07.582 "memory_domains": [ 00:06:07.582 { 00:06:07.582 "dma_device_id": "system", 00:06:07.582 "dma_device_type": 1 00:06:07.582 }, 00:06:07.582 { 00:06:07.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.582 "dma_device_type": 2 00:06:07.582 } 00:06:07.582 ], 00:06:07.582 "driver_specific": {} 00:06:07.582 } 00:06:07.582 ]' 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.582 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:07.582 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.583 [2024-07-24 22:45:05.646136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:07.583 [2024-07-24 22:45:05.646164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.583 [2024-07-24 22:45:05.646178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x494b240 00:06:07.583 [2024-07-24 22:45:05.646185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.583 [2024-07-24 22:45:05.646931] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:06:07.583 [2024-07-24 22:45:05.646952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:07.583 Passthru0 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.583 { 00:06:07.583 "name": "Malloc0", 00:06:07.583 "aliases": [ 00:06:07.583 "a5c3d460-74d3-4775-9c9c-7a9010f04ee0" 00:06:07.583 ], 00:06:07.583 "product_name": "Malloc disk", 00:06:07.583 "block_size": 512, 00:06:07.583 "num_blocks": 16384, 00:06:07.583 "uuid": "a5c3d460-74d3-4775-9c9c-7a9010f04ee0", 00:06:07.583 "assigned_rate_limits": { 00:06:07.583 "rw_ios_per_sec": 0, 00:06:07.583 "rw_mbytes_per_sec": 0, 00:06:07.583 "r_mbytes_per_sec": 0, 00:06:07.583 "w_mbytes_per_sec": 0 00:06:07.583 }, 00:06:07.583 "claimed": true, 00:06:07.583 "claim_type": "exclusive_write", 00:06:07.583 "zoned": false, 00:06:07.583 "supported_io_types": { 00:06:07.583 "read": true, 00:06:07.583 "write": true, 00:06:07.583 "unmap": true, 00:06:07.583 "flush": true, 00:06:07.583 "reset": true, 00:06:07.583 "nvme_admin": false, 00:06:07.583 "nvme_io": false, 00:06:07.583 "nvme_io_md": false, 00:06:07.583 "write_zeroes": true, 00:06:07.583 "zcopy": true, 00:06:07.583 "get_zone_info": false, 00:06:07.583 "zone_management": false, 00:06:07.583 "zone_append": false, 00:06:07.583 "compare": false, 00:06:07.583 "compare_and_write": false, 00:06:07.583 "abort": true, 00:06:07.583 "seek_hole": false, 00:06:07.583 "seek_data": false, 00:06:07.583 "copy": true, 00:06:07.583 "nvme_iov_md": false 00:06:07.583 }, 00:06:07.583 "memory_domains": [ 00:06:07.583 { 00:06:07.583 "dma_device_id": "system", 00:06:07.583 "dma_device_type": 1 00:06:07.583 }, 00:06:07.583 { 00:06:07.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.583 "dma_device_type": 2 00:06:07.583 } 00:06:07.583 ], 00:06:07.583 "driver_specific": {} 00:06:07.583 }, 00:06:07.583 { 00:06:07.583 "name": "Passthru0", 00:06:07.583 "aliases": [ 00:06:07.583 "9a2e3495-d078-5792-8af0-108cb00864e7" 00:06:07.583 ], 00:06:07.583 "product_name": "passthru", 00:06:07.583 "block_size": 512, 00:06:07.583 "num_blocks": 16384, 00:06:07.583 "uuid": "9a2e3495-d078-5792-8af0-108cb00864e7", 00:06:07.583 "assigned_rate_limits": { 00:06:07.583 "rw_ios_per_sec": 0, 00:06:07.583 "rw_mbytes_per_sec": 0, 00:06:07.583 "r_mbytes_per_sec": 0, 00:06:07.583 "w_mbytes_per_sec": 0 00:06:07.583 }, 00:06:07.583 "claimed": false, 00:06:07.583 "zoned": false, 00:06:07.583 "supported_io_types": { 00:06:07.583 "read": true, 00:06:07.583 "write": true, 00:06:07.583 "unmap": true, 00:06:07.583 "flush": true, 00:06:07.583 "reset": true, 00:06:07.583 "nvme_admin": false, 00:06:07.583 "nvme_io": false, 00:06:07.583 "nvme_io_md": false, 00:06:07.583 "write_zeroes": true, 00:06:07.583 "zcopy": true, 00:06:07.583 "get_zone_info": false, 00:06:07.583 "zone_management": false, 00:06:07.583 "zone_append": false, 00:06:07.583 "compare": false, 00:06:07.583 "compare_and_write": false, 00:06:07.583 "abort": true, 00:06:07.583 "seek_hole": false, 00:06:07.583 "seek_data": false, 00:06:07.583 "copy": true, 00:06:07.583 
"nvme_iov_md": false 00:06:07.583 }, 00:06:07.583 "memory_domains": [ 00:06:07.583 { 00:06:07.583 "dma_device_id": "system", 00:06:07.583 "dma_device_type": 1 00:06:07.583 }, 00:06:07.583 { 00:06:07.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.583 "dma_device_type": 2 00:06:07.583 } 00:06:07.583 ], 00:06:07.583 "driver_specific": { 00:06:07.583 "passthru": { 00:06:07.583 "name": "Passthru0", 00:06:07.583 "base_bdev_name": "Malloc0" 00:06:07.583 } 00:06:07.583 } 00:06:07.583 } 00:06:07.583 ]' 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.583 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.583 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:07.843 22:45:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.843 00:06:07.843 real 0m0.280s 00:06:07.843 user 0m0.177s 00:06:07.843 sys 0m0.038s 00:06:07.843 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.843 22:45:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 ************************************ 00:06:07.843 END TEST rpc_integrity 00:06:07.843 ************************************ 00:06:07.843 22:45:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:07.843 22:45:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.843 22:45:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.843 22:45:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 ************************************ 00:06:07.843 START TEST rpc_plugins 00:06:07.843 ************************************ 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.843 22:45:05 
rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:07.843 { 00:06:07.843 "name": "Malloc1", 00:06:07.843 "aliases": [ 00:06:07.843 "84f61cff-0744-4824-a0ed-afb66938cdc0" 00:06:07.843 ], 00:06:07.843 "product_name": "Malloc disk", 00:06:07.843 "block_size": 4096, 00:06:07.843 "num_blocks": 256, 00:06:07.843 "uuid": "84f61cff-0744-4824-a0ed-afb66938cdc0", 00:06:07.843 "assigned_rate_limits": { 00:06:07.843 "rw_ios_per_sec": 0, 00:06:07.843 "rw_mbytes_per_sec": 0, 00:06:07.843 "r_mbytes_per_sec": 0, 00:06:07.843 "w_mbytes_per_sec": 0 00:06:07.843 }, 00:06:07.843 "claimed": false, 00:06:07.843 "zoned": false, 00:06:07.843 "supported_io_types": { 00:06:07.843 "read": true, 00:06:07.843 "write": true, 00:06:07.843 "unmap": true, 00:06:07.843 "flush": true, 00:06:07.843 "reset": true, 00:06:07.843 "nvme_admin": false, 00:06:07.843 "nvme_io": false, 00:06:07.843 "nvme_io_md": false, 00:06:07.843 "write_zeroes": true, 00:06:07.843 "zcopy": true, 00:06:07.843 "get_zone_info": false, 00:06:07.843 "zone_management": false, 00:06:07.843 "zone_append": false, 00:06:07.843 "compare": false, 00:06:07.843 "compare_and_write": false, 00:06:07.843 "abort": true, 00:06:07.843 "seek_hole": false, 00:06:07.843 "seek_data": false, 00:06:07.843 "copy": true, 00:06:07.843 "nvme_iov_md": false 00:06:07.843 }, 00:06:07.843 "memory_domains": [ 00:06:07.843 { 00:06:07.843 "dma_device_id": "system", 00:06:07.843 "dma_device_type": 1 00:06:07.843 }, 00:06:07.843 { 00:06:07.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.843 "dma_device_type": 2 00:06:07.843 } 00:06:07.843 ], 00:06:07.843 "driver_specific": {} 00:06:07.843 } 00:06:07.843 ]' 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:07.843 22:45:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:07.843 00:06:07.843 real 0m0.132s 00:06:07.843 user 0m0.070s 00:06:07.843 sys 0m0.025s 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.843 22:45:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 ************************************ 00:06:07.843 END TEST rpc_plugins 00:06:07.843 ************************************ 00:06:07.843 22:45:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:07.843 22:45:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.843 22:45:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:06:07.843 22:45:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.102 ************************************ 00:06:08.102 START TEST rpc_trace_cmd_test 00:06:08.102 ************************************ 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:08.102 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid465219", 00:06:08.102 "tpoint_group_mask": "0x8", 00:06:08.102 "iscsi_conn": { 00:06:08.102 "mask": "0x2", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "scsi": { 00:06:08.102 "mask": "0x4", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "bdev": { 00:06:08.102 "mask": "0x8", 00:06:08.102 "tpoint_mask": "0xffffffffffffffff" 00:06:08.102 }, 00:06:08.102 "nvmf_rdma": { 00:06:08.102 "mask": "0x10", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "nvmf_tcp": { 00:06:08.102 "mask": "0x20", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "ftl": { 00:06:08.102 "mask": "0x40", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "blobfs": { 00:06:08.102 "mask": "0x80", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "dsa": { 00:06:08.102 "mask": "0x200", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "thread": { 00:06:08.102 "mask": "0x400", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "nvme_pcie": { 00:06:08.102 "mask": "0x800", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "iaa": { 00:06:08.102 "mask": "0x1000", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "nvme_tcp": { 00:06:08.102 "mask": "0x2000", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "bdev_nvme": { 00:06:08.102 "mask": "0x4000", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 }, 00:06:08.102 "sock": { 00:06:08.102 "mask": "0x8000", 00:06:08.102 "tpoint_mask": "0x0" 00:06:08.102 } 00:06:08.102 }' 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:08.102 00:06:08.102 real 0m0.209s 00:06:08.102 user 0m0.177s 00:06:08.102 sys 0m0.023s 00:06:08.102 22:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.102 22:45:06 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.102 ************************************ 00:06:08.102 END TEST rpc_trace_cmd_test 00:06:08.102 ************************************ 00:06:08.102 22:45:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:08.102 22:45:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:08.102 22:45:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:08.102 22:45:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.103 22:45:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.103 22:45:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.362 ************************************ 00:06:08.362 START TEST rpc_daemon_integrity 00:06:08.362 ************************************ 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:08.362 { 00:06:08.362 "name": "Malloc2", 00:06:08.362 "aliases": [ 00:06:08.362 "f355436d-953b-41b5-aecc-037161a76374" 00:06:08.362 ], 00:06:08.362 "product_name": "Malloc disk", 00:06:08.362 "block_size": 512, 00:06:08.362 "num_blocks": 16384, 00:06:08.362 "uuid": "f355436d-953b-41b5-aecc-037161a76374", 00:06:08.362 "assigned_rate_limits": { 00:06:08.362 "rw_ios_per_sec": 0, 00:06:08.362 "rw_mbytes_per_sec": 0, 00:06:08.362 "r_mbytes_per_sec": 0, 00:06:08.362 "w_mbytes_per_sec": 0 00:06:08.362 }, 00:06:08.362 "claimed": false, 00:06:08.362 "zoned": false, 00:06:08.362 "supported_io_types": { 00:06:08.362 "read": true, 00:06:08.362 "write": true, 00:06:08.362 "unmap": true, 00:06:08.362 "flush": true, 00:06:08.362 "reset": true, 00:06:08.362 "nvme_admin": false, 00:06:08.362 "nvme_io": false, 00:06:08.362 "nvme_io_md": false, 00:06:08.362 "write_zeroes": true, 00:06:08.362 "zcopy": true, 00:06:08.362 "get_zone_info": false, 00:06:08.362 "zone_management": false, 00:06:08.362 "zone_append": false, 00:06:08.362 "compare": false, 00:06:08.362 "compare_and_write": false, 
00:06:08.362 "abort": true, 00:06:08.362 "seek_hole": false, 00:06:08.362 "seek_data": false, 00:06:08.362 "copy": true, 00:06:08.362 "nvme_iov_md": false 00:06:08.362 }, 00:06:08.362 "memory_domains": [ 00:06:08.362 { 00:06:08.362 "dma_device_id": "system", 00:06:08.362 "dma_device_type": 1 00:06:08.362 }, 00:06:08.362 { 00:06:08.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.362 "dma_device_type": 2 00:06:08.362 } 00:06:08.362 ], 00:06:08.362 "driver_specific": {} 00:06:08.362 } 00:06:08.362 ]' 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.362 [2024-07-24 22:45:06.468272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:08.362 [2024-07-24 22:45:06.468305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:08.362 [2024-07-24 22:45:06.468318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4906a10 00:06:08.362 [2024-07-24 22:45:06.468324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:08.362 [2024-07-24 22:45:06.469030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:08.362 [2024-07-24 22:45:06.469050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:08.362 Passthru0 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.362 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:08.362 { 00:06:08.362 "name": "Malloc2", 00:06:08.362 "aliases": [ 00:06:08.362 "f355436d-953b-41b5-aecc-037161a76374" 00:06:08.362 ], 00:06:08.362 "product_name": "Malloc disk", 00:06:08.362 "block_size": 512, 00:06:08.362 "num_blocks": 16384, 00:06:08.363 "uuid": "f355436d-953b-41b5-aecc-037161a76374", 00:06:08.363 "assigned_rate_limits": { 00:06:08.363 "rw_ios_per_sec": 0, 00:06:08.363 "rw_mbytes_per_sec": 0, 00:06:08.363 "r_mbytes_per_sec": 0, 00:06:08.363 "w_mbytes_per_sec": 0 00:06:08.363 }, 00:06:08.363 "claimed": true, 00:06:08.363 "claim_type": "exclusive_write", 00:06:08.363 "zoned": false, 00:06:08.363 "supported_io_types": { 00:06:08.363 "read": true, 00:06:08.363 "write": true, 00:06:08.363 "unmap": true, 00:06:08.363 "flush": true, 00:06:08.363 "reset": true, 00:06:08.363 "nvme_admin": false, 00:06:08.363 "nvme_io": false, 00:06:08.363 "nvme_io_md": false, 00:06:08.363 "write_zeroes": true, 00:06:08.363 "zcopy": true, 00:06:08.363 "get_zone_info": false, 00:06:08.363 "zone_management": false, 00:06:08.363 "zone_append": false, 00:06:08.363 "compare": false, 00:06:08.363 "compare_and_write": false, 00:06:08.363 "abort": true, 00:06:08.363 "seek_hole": false, 00:06:08.363 "seek_data": false, 00:06:08.363 "copy": true, 
00:06:08.363 "nvme_iov_md": false 00:06:08.363 }, 00:06:08.363 "memory_domains": [ 00:06:08.363 { 00:06:08.363 "dma_device_id": "system", 00:06:08.363 "dma_device_type": 1 00:06:08.363 }, 00:06:08.363 { 00:06:08.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.363 "dma_device_type": 2 00:06:08.363 } 00:06:08.363 ], 00:06:08.363 "driver_specific": {} 00:06:08.363 }, 00:06:08.363 { 00:06:08.363 "name": "Passthru0", 00:06:08.363 "aliases": [ 00:06:08.363 "6da20ddf-5a47-51bf-b971-f8c6c9f348bd" 00:06:08.363 ], 00:06:08.363 "product_name": "passthru", 00:06:08.363 "block_size": 512, 00:06:08.363 "num_blocks": 16384, 00:06:08.363 "uuid": "6da20ddf-5a47-51bf-b971-f8c6c9f348bd", 00:06:08.363 "assigned_rate_limits": { 00:06:08.363 "rw_ios_per_sec": 0, 00:06:08.363 "rw_mbytes_per_sec": 0, 00:06:08.363 "r_mbytes_per_sec": 0, 00:06:08.363 "w_mbytes_per_sec": 0 00:06:08.363 }, 00:06:08.363 "claimed": false, 00:06:08.363 "zoned": false, 00:06:08.363 "supported_io_types": { 00:06:08.363 "read": true, 00:06:08.363 "write": true, 00:06:08.363 "unmap": true, 00:06:08.363 "flush": true, 00:06:08.363 "reset": true, 00:06:08.363 "nvme_admin": false, 00:06:08.363 "nvme_io": false, 00:06:08.363 "nvme_io_md": false, 00:06:08.363 "write_zeroes": true, 00:06:08.363 "zcopy": true, 00:06:08.363 "get_zone_info": false, 00:06:08.363 "zone_management": false, 00:06:08.363 "zone_append": false, 00:06:08.363 "compare": false, 00:06:08.363 "compare_and_write": false, 00:06:08.363 "abort": true, 00:06:08.363 "seek_hole": false, 00:06:08.363 "seek_data": false, 00:06:08.363 "copy": true, 00:06:08.363 "nvme_iov_md": false 00:06:08.363 }, 00:06:08.363 "memory_domains": [ 00:06:08.363 { 00:06:08.363 "dma_device_id": "system", 00:06:08.363 "dma_device_type": 1 00:06:08.363 }, 00:06:08.363 { 00:06:08.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.363 "dma_device_type": 2 00:06:08.363 } 00:06:08.363 ], 00:06:08.363 "driver_specific": { 00:06:08.363 "passthru": { 00:06:08.363 "name": "Passthru0", 00:06:08.363 "base_bdev_name": "Malloc2" 00:06:08.363 } 00:06:08.363 } 00:06:08.363 } 00:06:08.363 ]' 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.363 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.623 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.623 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.623 22:45:06 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:08.623 22:45:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.623 00:06:08.623 real 0m0.279s 00:06:08.623 user 0m0.186s 00:06:08.623 sys 0m0.027s 00:06:08.623 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.623 22:45:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.623 ************************************ 00:06:08.623 END TEST rpc_daemon_integrity 00:06:08.623 ************************************ 00:06:08.623 22:45:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:08.623 22:45:06 rpc -- rpc/rpc.sh@84 -- # killprocess 465219 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@950 -- # '[' -z 465219 ']' 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@954 -- # kill -0 465219 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@955 -- # uname 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 465219 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 465219' 00:06:08.623 killing process with pid 465219 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@969 -- # kill 465219 00:06:08.623 22:45:06 rpc -- common/autotest_common.sh@974 -- # wait 465219 00:06:08.883 00:06:08.883 real 0m2.433s 00:06:08.883 user 0m3.134s 00:06:08.883 sys 0m0.659s 00:06:08.883 22:45:06 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.883 22:45:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.883 ************************************ 00:06:08.883 END TEST rpc 00:06:08.883 ************************************ 00:06:08.883 22:45:07 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:08.883 22:45:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.883 22:45:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.883 22:45:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.883 ************************************ 00:06:08.883 START TEST skip_rpc 00:06:08.883 ************************************ 00:06:08.883 22:45:07 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:09.142 * Looking for test storage... 
00:06:09.142 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:09.142 22:45:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:09.142 22:45:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:09.142 22:45:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:09.142 22:45:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.142 22:45:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.142 22:45:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.142 ************************************ 00:06:09.142 START TEST skip_rpc 00:06:09.142 ************************************ 00:06:09.142 22:45:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:09.142 22:45:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=465824 00:06:09.142 22:45:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.142 22:45:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:09.142 22:45:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:09.142 [2024-07-24 22:45:07.202099] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:09.142 [2024-07-24 22:45:07.202169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465824 ] 00:06:09.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.142 [2024-07-24 22:45:07.270884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.142 [2024-07-24 22:45:07.344324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.413 22:45:12 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 465824 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 465824 ']' 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 465824 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 465824 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 465824' 00:06:14.413 killing process with pid 465824 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 465824 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 465824 00:06:14.413 00:06:14.413 real 0m5.347s 00:06:14.413 user 0m5.103s 00:06:14.413 sys 0m0.263s 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.413 22:45:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.413 ************************************ 00:06:14.413 END TEST skip_rpc 00:06:14.413 ************************************ 00:06:14.413 22:45:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:14.413 22:45:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.413 22:45:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.413 22:45:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.413 ************************************ 00:06:14.413 START TEST skip_rpc_with_json 00:06:14.413 ************************************ 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=467212 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 467212 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 467212 ']' 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.413 22:45:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.413 [2024-07-24 22:45:12.615286] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:14.413 [2024-07-24 22:45:12.615345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467212 ] 00:06:14.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.673 [2024-07-24 22:45:12.685726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.673 [2024-07-24 22:45:12.762784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.240 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.240 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:15.240 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:15.240 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.240 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.240 [2024-07-24 22:45:13.444162] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:15.500 request: 00:06:15.500 { 00:06:15.500 "trtype": "tcp", 00:06:15.500 "method": "nvmf_get_transports", 00:06:15.500 "req_id": 1 00:06:15.500 } 00:06:15.500 Got JSON-RPC error response 00:06:15.500 response: 00:06:15.500 { 00:06:15.500 "code": -19, 00:06:15.500 "message": "No such device" 00:06:15.500 } 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.500 [2024-07-24 22:45:13.456254] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.500 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:15.500 { 00:06:15.500 "subsystems": [ 00:06:15.500 { 00:06:15.500 "subsystem": "scheduler", 00:06:15.500 "config": [ 00:06:15.500 { 00:06:15.500 "method": "framework_set_scheduler", 00:06:15.500 "params": { 00:06:15.500 "name": "static" 00:06:15.500 } 00:06:15.500 } 00:06:15.500 ] 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "subsystem": "vmd", 00:06:15.500 "config": [] 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "subsystem": "sock", 00:06:15.500 "config": [ 00:06:15.500 { 00:06:15.500 "method": "sock_set_default_impl", 00:06:15.500 
"params": { 00:06:15.500 "impl_name": "posix" 00:06:15.500 } 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "method": "sock_impl_set_options", 00:06:15.500 "params": { 00:06:15.500 "impl_name": "ssl", 00:06:15.500 "recv_buf_size": 4096, 00:06:15.500 "send_buf_size": 4096, 00:06:15.500 "enable_recv_pipe": true, 00:06:15.500 "enable_quickack": false, 00:06:15.500 "enable_placement_id": 0, 00:06:15.500 "enable_zerocopy_send_server": true, 00:06:15.500 "enable_zerocopy_send_client": false, 00:06:15.500 "zerocopy_threshold": 0, 00:06:15.500 "tls_version": 0, 00:06:15.500 "enable_ktls": false 00:06:15.500 } 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "method": "sock_impl_set_options", 00:06:15.500 "params": { 00:06:15.500 "impl_name": "posix", 00:06:15.500 "recv_buf_size": 2097152, 00:06:15.500 "send_buf_size": 2097152, 00:06:15.500 "enable_recv_pipe": true, 00:06:15.500 "enable_quickack": false, 00:06:15.500 "enable_placement_id": 0, 00:06:15.500 "enable_zerocopy_send_server": true, 00:06:15.500 "enable_zerocopy_send_client": false, 00:06:15.500 "zerocopy_threshold": 0, 00:06:15.500 "tls_version": 0, 00:06:15.500 "enable_ktls": false 00:06:15.500 } 00:06:15.500 } 00:06:15.500 ] 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "subsystem": "iobuf", 00:06:15.500 "config": [ 00:06:15.500 { 00:06:15.500 "method": "iobuf_set_options", 00:06:15.500 "params": { 00:06:15.500 "small_pool_count": 8192, 00:06:15.500 "large_pool_count": 1024, 00:06:15.500 "small_bufsize": 8192, 00:06:15.500 "large_bufsize": 135168 00:06:15.500 } 00:06:15.500 } 00:06:15.500 ] 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "subsystem": "keyring", 00:06:15.500 "config": [] 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "subsystem": "vfio_user_target", 00:06:15.500 "config": null 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "subsystem": "accel", 00:06:15.500 "config": [ 00:06:15.500 { 00:06:15.500 "method": "accel_set_options", 00:06:15.500 "params": { 00:06:15.500 "small_cache_size": 128, 00:06:15.500 "large_cache_size": 16, 00:06:15.500 "task_count": 2048, 00:06:15.500 "sequence_count": 2048, 00:06:15.500 "buf_count": 2048 00:06:15.500 } 00:06:15.500 } 00:06:15.500 ] 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "subsystem": "bdev", 00:06:15.500 "config": [ 00:06:15.500 { 00:06:15.500 "method": "bdev_set_options", 00:06:15.500 "params": { 00:06:15.500 "bdev_io_pool_size": 65535, 00:06:15.500 "bdev_io_cache_size": 256, 00:06:15.500 "bdev_auto_examine": true, 00:06:15.500 "iobuf_small_cache_size": 128, 00:06:15.500 "iobuf_large_cache_size": 16 00:06:15.500 } 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "method": "bdev_raid_set_options", 00:06:15.500 "params": { 00:06:15.500 "process_window_size_kb": 1024, 00:06:15.500 "process_max_bandwidth_mb_sec": 0 00:06:15.500 } 00:06:15.500 }, 00:06:15.500 { 00:06:15.500 "method": "bdev_nvme_set_options", 00:06:15.500 "params": { 00:06:15.500 "action_on_timeout": "none", 00:06:15.500 "timeout_us": 0, 00:06:15.500 "timeout_admin_us": 0, 00:06:15.500 "keep_alive_timeout_ms": 10000, 00:06:15.500 "arbitration_burst": 0, 00:06:15.500 "low_priority_weight": 0, 00:06:15.500 "medium_priority_weight": 0, 00:06:15.500 "high_priority_weight": 0, 00:06:15.500 "nvme_adminq_poll_period_us": 10000, 00:06:15.500 "nvme_ioq_poll_period_us": 0, 00:06:15.500 "io_queue_requests": 0, 00:06:15.500 "delay_cmd_submit": true, 00:06:15.500 "transport_retry_count": 4, 00:06:15.500 "bdev_retry_count": 3, 00:06:15.500 "transport_ack_timeout": 0, 00:06:15.500 "ctrlr_loss_timeout_sec": 0, 00:06:15.500 "reconnect_delay_sec": 0, 
00:06:15.500 "fast_io_fail_timeout_sec": 0, 00:06:15.500 "disable_auto_failback": false, 00:06:15.500 "generate_uuids": false, 00:06:15.500 "transport_tos": 0, 00:06:15.500 "nvme_error_stat": false, 00:06:15.500 "rdma_srq_size": 0, 00:06:15.500 "io_path_stat": false, 00:06:15.500 "allow_accel_sequence": false, 00:06:15.500 "rdma_max_cq_size": 0, 00:06:15.500 "rdma_cm_event_timeout_ms": 0, 00:06:15.500 "dhchap_digests": [ 00:06:15.500 "sha256", 00:06:15.500 "sha384", 00:06:15.500 "sha512" 00:06:15.501 ], 00:06:15.501 "dhchap_dhgroups": [ 00:06:15.501 "null", 00:06:15.501 "ffdhe2048", 00:06:15.501 "ffdhe3072", 00:06:15.501 "ffdhe4096", 00:06:15.501 "ffdhe6144", 00:06:15.501 "ffdhe8192" 00:06:15.501 ] 00:06:15.501 } 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "method": "bdev_nvme_set_hotplug", 00:06:15.501 "params": { 00:06:15.501 "period_us": 100000, 00:06:15.501 "enable": false 00:06:15.501 } 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "method": "bdev_iscsi_set_options", 00:06:15.501 "params": { 00:06:15.501 "timeout_sec": 30 00:06:15.501 } 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "method": "bdev_wait_for_examine" 00:06:15.501 } 00:06:15.501 ] 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "subsystem": "nvmf", 00:06:15.501 "config": [ 00:06:15.501 { 00:06:15.501 "method": "nvmf_set_config", 00:06:15.501 "params": { 00:06:15.501 "discovery_filter": "match_any", 00:06:15.501 "admin_cmd_passthru": { 00:06:15.501 "identify_ctrlr": false 00:06:15.501 } 00:06:15.501 } 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "method": "nvmf_set_max_subsystems", 00:06:15.501 "params": { 00:06:15.501 "max_subsystems": 1024 00:06:15.501 } 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "method": "nvmf_set_crdt", 00:06:15.501 "params": { 00:06:15.501 "crdt1": 0, 00:06:15.501 "crdt2": 0, 00:06:15.501 "crdt3": 0 00:06:15.501 } 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "method": "nvmf_create_transport", 00:06:15.501 "params": { 00:06:15.501 "trtype": "TCP", 00:06:15.501 "max_queue_depth": 128, 00:06:15.501 "max_io_qpairs_per_ctrlr": 127, 00:06:15.501 "in_capsule_data_size": 4096, 00:06:15.501 "max_io_size": 131072, 00:06:15.501 "io_unit_size": 131072, 00:06:15.501 "max_aq_depth": 128, 00:06:15.501 "num_shared_buffers": 511, 00:06:15.501 "buf_cache_size": 4294967295, 00:06:15.501 "dif_insert_or_strip": false, 00:06:15.501 "zcopy": false, 00:06:15.501 "c2h_success": true, 00:06:15.501 "sock_priority": 0, 00:06:15.501 "abort_timeout_sec": 1, 00:06:15.501 "ack_timeout": 0, 00:06:15.501 "data_wr_pool_size": 0 00:06:15.501 } 00:06:15.501 } 00:06:15.501 ] 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "subsystem": "nbd", 00:06:15.501 "config": [] 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "subsystem": "ublk", 00:06:15.501 "config": [] 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "subsystem": "vhost_blk", 00:06:15.501 "config": [] 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "subsystem": "scsi", 00:06:15.501 "config": null 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "subsystem": "iscsi", 00:06:15.501 "config": [ 00:06:15.501 { 00:06:15.501 "method": "iscsi_set_options", 00:06:15.501 "params": { 00:06:15.501 "node_base": "iqn.2016-06.io.spdk", 00:06:15.501 "max_sessions": 128, 00:06:15.501 "max_connections_per_session": 2, 00:06:15.501 "max_queue_depth": 64, 00:06:15.501 "default_time2wait": 2, 00:06:15.501 "default_time2retain": 20, 00:06:15.501 "first_burst_length": 8192, 00:06:15.501 "immediate_data": true, 00:06:15.501 "allow_duplicated_isid": false, 00:06:15.501 "error_recovery_level": 0, 00:06:15.501 "nop_timeout": 60, 
00:06:15.501 "nop_in_interval": 30, 00:06:15.501 "disable_chap": false, 00:06:15.501 "require_chap": false, 00:06:15.501 "mutual_chap": false, 00:06:15.501 "chap_group": 0, 00:06:15.501 "max_large_datain_per_connection": 64, 00:06:15.501 "max_r2t_per_connection": 4, 00:06:15.501 "pdu_pool_size": 36864, 00:06:15.501 "immediate_data_pool_size": 16384, 00:06:15.501 "data_out_pool_size": 2048 00:06:15.501 } 00:06:15.501 } 00:06:15.501 ] 00:06:15.501 }, 00:06:15.501 { 00:06:15.501 "subsystem": "vhost_scsi", 00:06:15.501 "config": [] 00:06:15.501 } 00:06:15.501 ] 00:06:15.501 } 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 467212 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 467212 ']' 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 467212 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467212 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 467212' 00:06:15.501 killing process with pid 467212 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 467212 00:06:15.501 22:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 467212 00:06:15.760 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=467432 00:06:15.760 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:15.760 22:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:21.032 22:45:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 467432 00:06:21.032 22:45:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 467432 ']' 00:06:21.032 22:45:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 467432 00:06:21.032 22:45:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:21.032 22:45:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.032 22:45:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467432 00:06:21.032 22:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.032 22:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.032 22:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 467432' 00:06:21.032 killing process with pid 467432 00:06:21.032 22:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 467432 00:06:21.032 22:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- 
# wait 467432 00:06:21.293 22:45:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:21.293 22:45:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:21.293 00:06:21.293 real 0m6.711s 00:06:21.293 user 0m6.520s 00:06:21.293 sys 0m0.593s 00:06:21.293 22:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.293 22:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:21.293 ************************************ 00:06:21.293 END TEST skip_rpc_with_json 00:06:21.293 ************************************ 00:06:21.293 22:45:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:21.293 22:45:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.293 22:45:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.293 22:45:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.293 ************************************ 00:06:21.293 START TEST skip_rpc_with_delay 00:06:21.293 ************************************ 00:06:21.293 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:21.293 22:45:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.294 [2024-07-24 22:45:19.395663] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:21.294 [2024-07-24 22:45:19.395772] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.294 00:06:21.294 real 0m0.041s 00:06:21.294 user 0m0.026s 00:06:21.294 sys 0m0.015s 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.294 22:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:21.294 ************************************ 00:06:21.294 END TEST skip_rpc_with_delay 00:06:21.294 ************************************ 00:06:21.294 22:45:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:21.294 22:45:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:21.294 22:45:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:21.294 22:45:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.294 22:45:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.294 22:45:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.294 ************************************ 00:06:21.294 START TEST exit_on_failed_rpc_init 00:06:21.294 ************************************ 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=468341 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 468341 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 468341 ']' 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.294 22:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.554 [2024-07-24 22:45:19.501432] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:06:21.554 [2024-07-24 22:45:19.501502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468341 ] 00:06:21.554 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.554 [2024-07-24 22:45:19.572159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.554 [2024-07-24 22:45:19.645656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.121 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.121 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:22.121 22:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.121 22:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.381 [2024-07-24 22:45:20.354964] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:06:22.381 [2024-07-24 22:45:20.355021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468558 ] 00:06:22.381 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.381 [2024-07-24 22:45:20.421674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.381 [2024-07-24 22:45:20.495204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.381 [2024-07-24 22:45:20.495293] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:22.381 [2024-07-24 22:45:20.495303] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:22.381 [2024-07-24 22:45:20.495309] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 468341 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 468341 ']' 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 468341 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.381 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 468341 00:06:22.640 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.640 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.640 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 468341' 00:06:22.640 killing process with pid 468341 00:06:22.640 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 468341 00:06:22.640 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 468341 00:06:22.899 00:06:22.899 real 0m1.423s 00:06:22.899 user 0m1.603s 00:06:22.899 sys 0m0.417s 00:06:22.899 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.899 22:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.899 ************************************ 00:06:22.899 END TEST exit_on_failed_rpc_init 00:06:22.899 ************************************ 00:06:22.899 22:45:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 
00:06:22.899 00:06:22.899 real 0m13.887s 00:06:22.899 user 0m13.398s 00:06:22.899 sys 0m1.531s 00:06:22.899 22:45:20 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.899 22:45:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.899 ************************************ 00:06:22.899 END TEST skip_rpc 00:06:22.899 ************************************ 00:06:22.899 22:45:20 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:22.899 22:45:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.899 22:45:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.899 22:45:20 -- common/autotest_common.sh@10 -- # set +x 00:06:22.899 ************************************ 00:06:22.899 START TEST rpc_client 00:06:22.899 ************************************ 00:06:22.899 22:45:21 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:22.899 * Looking for test storage... 00:06:22.899 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:22.899 22:45:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:22.899 OK 00:06:23.158 22:45:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:23.158 00:06:23.158 real 0m0.104s 00:06:23.158 user 0m0.047s 00:06:23.158 sys 0m0.064s 00:06:23.158 22:45:21 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.158 22:45:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:23.158 ************************************ 00:06:23.158 END TEST rpc_client 00:06:23.158 ************************************ 00:06:23.158 22:45:21 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:23.158 22:45:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.158 22:45:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.158 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:06:23.158 ************************************ 00:06:23.158 START TEST json_config 00:06:23.158 ************************************ 00:06:23.158 22:45:21 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:23.158 22:45:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.158 22:45:21 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:23.158 22:45:21 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.158 22:45:21 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.158 22:45:21 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.158 22:45:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.158 22:45:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.158 22:45:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.158 22:45:21 json_config -- paths/export.sh@5 -- # export PATH 00:06:23.159 22:45:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@47 -- # : 0 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.159 22:45:21 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.159 22:45:21 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:23.159 22:45:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:23.159 22:45:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:23.159 22:45:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:23.159 22:45:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:23.159 22:45:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:23.159 WARNING: No tests are enabled so not running JSON configuration tests 00:06:23.159 22:45:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:23.159 00:06:23.159 real 0m0.088s 00:06:23.159 user 0m0.042s 00:06:23.159 sys 0m0.046s 00:06:23.159 22:45:21 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.159 22:45:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.159 ************************************ 00:06:23.159 END TEST json_config 00:06:23.159 ************************************ 00:06:23.159 22:45:21 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:23.159 22:45:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.159 22:45:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.159 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:06:23.159 ************************************ 00:06:23.159 START TEST json_config_extra_key 00:06:23.159 ************************************ 00:06:23.159 22:45:21 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:06:23.419 22:45:21 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:23.419 22:45:21 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.419 22:45:21 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.419 22:45:21 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.419 22:45:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.419 22:45:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.419 22:45:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.419 22:45:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:23.419 22:45:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.419 22:45:21 
json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.419 22:45:21 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:23.419 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:23.420 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:23.420 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:23.420 INFO: launching applications... 00:06:23.420 22:45:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=468914 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.420 Waiting for target to run... 
00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 468914 /var/tmp/spdk_tgt.sock 00:06:23.420 22:45:21 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 468914 ']' 00:06:23.420 22:45:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:23.420 22:45:21 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.420 22:45:21 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.420 22:45:21 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.420 22:45:21 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.420 22:45:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.420 [2024-07-24 22:45:21.441575] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:23.420 [2024-07-24 22:45:21.441647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468914 ] 00:06:23.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.679 [2024-07-24 22:45:21.882879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.937 [2024-07-24 22:45:21.966057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.195 22:45:22 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.195 22:45:22 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:24.195 00:06:24.195 22:45:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:24.195 INFO: shutting down applications... 
00:06:24.195 22:45:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 468914 ]] 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 468914 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 468914 00:06:24.195 22:45:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.763 22:45:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.763 22:45:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.763 22:45:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 468914 00:06:24.763 22:45:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:24.763 22:45:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:24.763 22:45:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:24.763 22:45:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:24.763 SPDK target shutdown done 00:06:24.763 22:45:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:24.763 Success 00:06:24.763 00:06:24.763 real 0m1.448s 00:06:24.763 user 0m1.069s 00:06:24.763 sys 0m0.518s 00:06:24.763 22:45:22 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.763 22:45:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:24.763 ************************************ 00:06:24.763 END TEST json_config_extra_key 00:06:24.763 ************************************ 00:06:24.763 22:45:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.763 22:45:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.764 22:45:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.764 22:45:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.764 ************************************ 00:06:24.764 START TEST alias_rpc 00:06:24.764 ************************************ 00:06:24.764 22:45:22 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.764 * Looking for test storage... 
00:06:24.764 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:24.764 22:45:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:24.764 22:45:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=469173 00:06:24.764 22:45:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 469173 00:06:24.764 22:45:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.764 22:45:22 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 469173 ']' 00:06:24.764 22:45:22 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.764 22:45:22 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.764 22:45:22 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.764 22:45:22 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.764 22:45:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.764 [2024-07-24 22:45:22.955791] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:24.764 [2024-07-24 22:45:22.955901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469173 ] 00:06:25.022 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.022 [2024-07-24 22:45:23.024863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.023 [2024-07-24 22:45:23.106214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.590 22:45:23 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.590 22:45:23 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:25.590 22:45:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:25.849 22:45:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 469173 00:06:25.849 22:45:23 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 469173 ']' 00:06:25.849 22:45:23 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 469173 00:06:25.849 22:45:23 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:25.849 22:45:23 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.849 22:45:23 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 469173 00:06:25.849 22:45:24 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.849 22:45:24 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.849 22:45:24 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 469173' 00:06:25.849 killing process with pid 469173 00:06:25.849 22:45:24 alias_rpc -- common/autotest_common.sh@969 -- # kill 469173 00:06:25.849 22:45:24 alias_rpc -- common/autotest_common.sh@974 -- # wait 469173 00:06:26.417 00:06:26.417 real 0m1.491s 00:06:26.417 user 0m1.621s 00:06:26.417 sys 0m0.423s 00:06:26.417 22:45:24 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.417 22:45:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 
************************************ 00:06:26.417 END TEST alias_rpc 00:06:26.417 ************************************ 00:06:26.417 22:45:24 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:26.417 22:45:24 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:26.417 22:45:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.417 22:45:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.417 22:45:24 -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 ************************************ 00:06:26.417 START TEST spdkcli_tcp 00:06:26.417 ************************************ 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:26.417 * Looking for test storage... 00:06:26.417 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=469449 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:26.417 22:45:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 469449 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 469449 ']' 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.417 22:45:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 [2024-07-24 22:45:24.518421] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:06:26.417 [2024-07-24 22:45:24.518494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469449 ] 00:06:26.417 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.417 [2024-07-24 22:45:24.590315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.708 [2024-07-24 22:45:24.672746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.708 [2024-07-24 22:45:24.672746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.275 22:45:25 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.275 22:45:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:27.275 22:45:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=469661 00:06:27.275 22:45:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:27.275 22:45:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:27.535 [ 00:06:27.535 "spdk_get_version", 00:06:27.535 "rpc_get_methods", 00:06:27.535 "trace_get_info", 00:06:27.535 "trace_get_tpoint_group_mask", 00:06:27.535 "trace_disable_tpoint_group", 00:06:27.535 "trace_enable_tpoint_group", 00:06:27.535 "trace_clear_tpoint_mask", 00:06:27.535 "trace_set_tpoint_mask", 00:06:27.535 "vfu_tgt_set_base_path", 00:06:27.535 "framework_get_pci_devices", 00:06:27.535 "framework_get_config", 00:06:27.535 "framework_get_subsystems", 00:06:27.535 "keyring_get_keys", 00:06:27.535 "iobuf_get_stats", 00:06:27.535 "iobuf_set_options", 00:06:27.535 "sock_get_default_impl", 00:06:27.535 "sock_set_default_impl", 00:06:27.535 "sock_impl_set_options", 00:06:27.535 "sock_impl_get_options", 00:06:27.535 "vmd_rescan", 00:06:27.535 "vmd_remove_device", 00:06:27.535 "vmd_enable", 00:06:27.535 "accel_get_stats", 00:06:27.535 "accel_set_options", 00:06:27.535 "accel_set_driver", 00:06:27.535 "accel_crypto_key_destroy", 00:06:27.535 "accel_crypto_keys_get", 00:06:27.535 "accel_crypto_key_create", 00:06:27.535 "accel_assign_opc", 00:06:27.535 "accel_get_module_info", 00:06:27.535 "accel_get_opc_assignments", 00:06:27.535 "notify_get_notifications", 00:06:27.535 "notify_get_types", 00:06:27.535 "bdev_get_histogram", 00:06:27.535 "bdev_enable_histogram", 00:06:27.535 "bdev_set_qos_limit", 00:06:27.535 "bdev_set_qd_sampling_period", 00:06:27.535 "bdev_get_bdevs", 00:06:27.535 "bdev_reset_iostat", 00:06:27.535 "bdev_get_iostat", 00:06:27.535 "bdev_examine", 00:06:27.535 "bdev_wait_for_examine", 00:06:27.535 "bdev_set_options", 00:06:27.535 "scsi_get_devices", 00:06:27.535 "thread_set_cpumask", 00:06:27.535 "framework_get_governor", 00:06:27.535 "framework_get_scheduler", 00:06:27.535 "framework_set_scheduler", 00:06:27.535 "framework_get_reactors", 00:06:27.535 "thread_get_io_channels", 00:06:27.535 "thread_get_pollers", 00:06:27.535 "thread_get_stats", 00:06:27.535 "framework_monitor_context_switch", 00:06:27.535 "spdk_kill_instance", 00:06:27.535 "log_enable_timestamps", 00:06:27.535 "log_get_flags", 00:06:27.535 "log_clear_flag", 00:06:27.535 "log_set_flag", 00:06:27.535 "log_get_level", 00:06:27.535 "log_set_level", 00:06:27.535 "log_get_print_level", 00:06:27.535 "log_set_print_level", 00:06:27.535 "framework_enable_cpumask_locks", 00:06:27.535 "framework_disable_cpumask_locks", 
00:06:27.535 "framework_wait_init", 00:06:27.535 "framework_start_init", 00:06:27.535 "virtio_blk_create_transport", 00:06:27.535 "virtio_blk_get_transports", 00:06:27.535 "vhost_controller_set_coalescing", 00:06:27.535 "vhost_get_controllers", 00:06:27.535 "vhost_delete_controller", 00:06:27.535 "vhost_create_blk_controller", 00:06:27.535 "vhost_scsi_controller_remove_target", 00:06:27.535 "vhost_scsi_controller_add_target", 00:06:27.535 "vhost_start_scsi_controller", 00:06:27.535 "vhost_create_scsi_controller", 00:06:27.535 "ublk_recover_disk", 00:06:27.535 "ublk_get_disks", 00:06:27.535 "ublk_stop_disk", 00:06:27.535 "ublk_start_disk", 00:06:27.535 "ublk_destroy_target", 00:06:27.535 "ublk_create_target", 00:06:27.535 "nbd_get_disks", 00:06:27.535 "nbd_stop_disk", 00:06:27.535 "nbd_start_disk", 00:06:27.535 "env_dpdk_get_mem_stats", 00:06:27.535 "nvmf_stop_mdns_prr", 00:06:27.535 "nvmf_publish_mdns_prr", 00:06:27.535 "nvmf_subsystem_get_listeners", 00:06:27.535 "nvmf_subsystem_get_qpairs", 00:06:27.535 "nvmf_subsystem_get_controllers", 00:06:27.535 "nvmf_get_stats", 00:06:27.535 "nvmf_get_transports", 00:06:27.535 "nvmf_create_transport", 00:06:27.535 "nvmf_get_targets", 00:06:27.535 "nvmf_delete_target", 00:06:27.535 "nvmf_create_target", 00:06:27.535 "nvmf_subsystem_allow_any_host", 00:06:27.535 "nvmf_subsystem_remove_host", 00:06:27.535 "nvmf_subsystem_add_host", 00:06:27.535 "nvmf_ns_remove_host", 00:06:27.535 "nvmf_ns_add_host", 00:06:27.535 "nvmf_subsystem_remove_ns", 00:06:27.535 "nvmf_subsystem_add_ns", 00:06:27.535 "nvmf_subsystem_listener_set_ana_state", 00:06:27.535 "nvmf_discovery_get_referrals", 00:06:27.535 "nvmf_discovery_remove_referral", 00:06:27.535 "nvmf_discovery_add_referral", 00:06:27.535 "nvmf_subsystem_remove_listener", 00:06:27.535 "nvmf_subsystem_add_listener", 00:06:27.535 "nvmf_delete_subsystem", 00:06:27.535 "nvmf_create_subsystem", 00:06:27.535 "nvmf_get_subsystems", 00:06:27.535 "nvmf_set_crdt", 00:06:27.535 "nvmf_set_config", 00:06:27.535 "nvmf_set_max_subsystems", 00:06:27.535 "iscsi_get_histogram", 00:06:27.535 "iscsi_enable_histogram", 00:06:27.535 "iscsi_set_options", 00:06:27.535 "iscsi_get_auth_groups", 00:06:27.535 "iscsi_auth_group_remove_secret", 00:06:27.535 "iscsi_auth_group_add_secret", 00:06:27.535 "iscsi_delete_auth_group", 00:06:27.535 "iscsi_create_auth_group", 00:06:27.535 "iscsi_set_discovery_auth", 00:06:27.535 "iscsi_get_options", 00:06:27.535 "iscsi_target_node_request_logout", 00:06:27.535 "iscsi_target_node_set_redirect", 00:06:27.535 "iscsi_target_node_set_auth", 00:06:27.535 "iscsi_target_node_add_lun", 00:06:27.535 "iscsi_get_stats", 00:06:27.535 "iscsi_get_connections", 00:06:27.535 "iscsi_portal_group_set_auth", 00:06:27.535 "iscsi_start_portal_group", 00:06:27.535 "iscsi_delete_portal_group", 00:06:27.535 "iscsi_create_portal_group", 00:06:27.535 "iscsi_get_portal_groups", 00:06:27.535 "iscsi_delete_target_node", 00:06:27.535 "iscsi_target_node_remove_pg_ig_maps", 00:06:27.535 "iscsi_target_node_add_pg_ig_maps", 00:06:27.535 "iscsi_create_target_node", 00:06:27.535 "iscsi_get_target_nodes", 00:06:27.535 "iscsi_delete_initiator_group", 00:06:27.535 "iscsi_initiator_group_remove_initiators", 00:06:27.535 "iscsi_initiator_group_add_initiators", 00:06:27.535 "iscsi_create_initiator_group", 00:06:27.535 "iscsi_get_initiator_groups", 00:06:27.535 "keyring_linux_set_options", 00:06:27.535 "keyring_file_remove_key", 00:06:27.535 "keyring_file_add_key", 00:06:27.536 "vfu_virtio_create_scsi_endpoint", 00:06:27.536 
"vfu_virtio_scsi_remove_target", 00:06:27.536 "vfu_virtio_scsi_add_target", 00:06:27.536 "vfu_virtio_create_blk_endpoint", 00:06:27.536 "vfu_virtio_delete_endpoint", 00:06:27.536 "iaa_scan_accel_module", 00:06:27.536 "dsa_scan_accel_module", 00:06:27.536 "ioat_scan_accel_module", 00:06:27.536 "accel_error_inject_error", 00:06:27.536 "bdev_iscsi_delete", 00:06:27.536 "bdev_iscsi_create", 00:06:27.536 "bdev_iscsi_set_options", 00:06:27.536 "bdev_virtio_attach_controller", 00:06:27.536 "bdev_virtio_scsi_get_devices", 00:06:27.536 "bdev_virtio_detach_controller", 00:06:27.536 "bdev_virtio_blk_set_hotplug", 00:06:27.536 "bdev_ftl_set_property", 00:06:27.536 "bdev_ftl_get_properties", 00:06:27.536 "bdev_ftl_get_stats", 00:06:27.536 "bdev_ftl_unmap", 00:06:27.536 "bdev_ftl_unload", 00:06:27.536 "bdev_ftl_delete", 00:06:27.536 "bdev_ftl_load", 00:06:27.536 "bdev_ftl_create", 00:06:27.536 "bdev_aio_delete", 00:06:27.536 "bdev_aio_rescan", 00:06:27.536 "bdev_aio_create", 00:06:27.536 "blobfs_create", 00:06:27.536 "blobfs_detect", 00:06:27.536 "blobfs_set_cache_size", 00:06:27.536 "bdev_zone_block_delete", 00:06:27.536 "bdev_zone_block_create", 00:06:27.536 "bdev_delay_delete", 00:06:27.536 "bdev_delay_create", 00:06:27.536 "bdev_delay_update_latency", 00:06:27.536 "bdev_split_delete", 00:06:27.536 "bdev_split_create", 00:06:27.536 "bdev_error_inject_error", 00:06:27.536 "bdev_error_delete", 00:06:27.536 "bdev_error_create", 00:06:27.536 "bdev_raid_set_options", 00:06:27.536 "bdev_raid_remove_base_bdev", 00:06:27.536 "bdev_raid_add_base_bdev", 00:06:27.536 "bdev_raid_delete", 00:06:27.536 "bdev_raid_create", 00:06:27.536 "bdev_raid_get_bdevs", 00:06:27.536 "bdev_lvol_set_parent_bdev", 00:06:27.536 "bdev_lvol_set_parent", 00:06:27.536 "bdev_lvol_check_shallow_copy", 00:06:27.536 "bdev_lvol_start_shallow_copy", 00:06:27.536 "bdev_lvol_grow_lvstore", 00:06:27.536 "bdev_lvol_get_lvols", 00:06:27.536 "bdev_lvol_get_lvstores", 00:06:27.536 "bdev_lvol_delete", 00:06:27.536 "bdev_lvol_set_read_only", 00:06:27.536 "bdev_lvol_resize", 00:06:27.536 "bdev_lvol_decouple_parent", 00:06:27.536 "bdev_lvol_inflate", 00:06:27.536 "bdev_lvol_rename", 00:06:27.536 "bdev_lvol_clone_bdev", 00:06:27.536 "bdev_lvol_clone", 00:06:27.536 "bdev_lvol_snapshot", 00:06:27.536 "bdev_lvol_create", 00:06:27.536 "bdev_lvol_delete_lvstore", 00:06:27.536 "bdev_lvol_rename_lvstore", 00:06:27.536 "bdev_lvol_create_lvstore", 00:06:27.536 "bdev_passthru_delete", 00:06:27.536 "bdev_passthru_create", 00:06:27.536 "bdev_nvme_cuse_unregister", 00:06:27.536 "bdev_nvme_cuse_register", 00:06:27.536 "bdev_opal_new_user", 00:06:27.536 "bdev_opal_set_lock_state", 00:06:27.536 "bdev_opal_delete", 00:06:27.536 "bdev_opal_get_info", 00:06:27.536 "bdev_opal_create", 00:06:27.536 "bdev_nvme_opal_revert", 00:06:27.536 "bdev_nvme_opal_init", 00:06:27.536 "bdev_nvme_send_cmd", 00:06:27.536 "bdev_nvme_get_path_iostat", 00:06:27.536 "bdev_nvme_get_mdns_discovery_info", 00:06:27.536 "bdev_nvme_stop_mdns_discovery", 00:06:27.536 "bdev_nvme_start_mdns_discovery", 00:06:27.536 "bdev_nvme_set_multipath_policy", 00:06:27.536 "bdev_nvme_set_preferred_path", 00:06:27.536 "bdev_nvme_get_io_paths", 00:06:27.536 "bdev_nvme_remove_error_injection", 00:06:27.536 "bdev_nvme_add_error_injection", 00:06:27.536 "bdev_nvme_get_discovery_info", 00:06:27.536 "bdev_nvme_stop_discovery", 00:06:27.536 "bdev_nvme_start_discovery", 00:06:27.536 "bdev_nvme_get_controller_health_info", 00:06:27.536 "bdev_nvme_disable_controller", 00:06:27.536 "bdev_nvme_enable_controller", 00:06:27.536 
"bdev_nvme_reset_controller", 00:06:27.536 "bdev_nvme_get_transport_statistics", 00:06:27.536 "bdev_nvme_apply_firmware", 00:06:27.536 "bdev_nvme_detach_controller", 00:06:27.536 "bdev_nvme_get_controllers", 00:06:27.536 "bdev_nvme_attach_controller", 00:06:27.536 "bdev_nvme_set_hotplug", 00:06:27.536 "bdev_nvme_set_options", 00:06:27.536 "bdev_null_resize", 00:06:27.536 "bdev_null_delete", 00:06:27.536 "bdev_null_create", 00:06:27.536 "bdev_malloc_delete", 00:06:27.536 "bdev_malloc_create" 00:06:27.536 ] 00:06:27.536 22:45:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.536 22:45:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:27.536 22:45:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 469449 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 469449 ']' 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 469449 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 469449 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 469449' 00:06:27.536 killing process with pid 469449 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 469449 00:06:27.536 22:45:25 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 469449 00:06:27.796 00:06:27.796 real 0m1.485s 00:06:27.796 user 0m2.765s 00:06:27.796 sys 0m0.435s 00:06:27.796 22:45:25 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.796 22:45:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.796 ************************************ 00:06:27.796 END TEST spdkcli_tcp 00:06:27.796 ************************************ 00:06:27.796 22:45:25 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:27.796 22:45:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.796 22:45:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.796 22:45:25 -- common/autotest_common.sh@10 -- # set +x 00:06:27.796 ************************************ 00:06:27.796 START TEST dpdk_mem_utility 00:06:27.796 ************************************ 00:06:27.796 22:45:25 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.054 * Looking for test storage... 
00:06:28.054 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:28.054 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:28.054 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=469729 00:06:28.054 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 469729 00:06:28.054 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.054 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 469729 ']' 00:06:28.055 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.055 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.055 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.055 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.055 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.055 [2024-07-24 22:45:26.059090] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:28.055 [2024-07-24 22:45:26.059151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469729 ] 00:06:28.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.055 [2024-07-24 22:45:26.126843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.055 [2024-07-24 22:45:26.207096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.991 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.991 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:28.991 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:28.991 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:28.991 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.991 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.991 { 00:06:28.991 "filename": "/tmp/spdk_mem_dump.txt" 00:06:28.991 } 00:06:28.991 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.991 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:28.991 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:28.991 1 heaps totaling size 814.000000 MiB 00:06:28.991 size: 814.000000 MiB heap id: 0 00:06:28.991 end heaps---------- 00:06:28.991 8 mempools totaling size 598.116089 MiB 00:06:28.991 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:28.991 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:28.991 size: 84.521057 MiB name: bdev_io_469729 00:06:28.991 size: 51.011292 MiB name: evtpool_469729 00:06:28.991 
size: 50.003479 MiB name: msgpool_469729 00:06:28.991 size: 21.763794 MiB name: PDU_Pool 00:06:28.991 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:28.991 size: 0.026123 MiB name: Session_Pool 00:06:28.991 end mempools------- 00:06:28.991 6 memzones totaling size 4.142822 MiB 00:06:28.991 size: 1.000366 MiB name: RG_ring_0_469729 00:06:28.991 size: 1.000366 MiB name: RG_ring_1_469729 00:06:28.991 size: 1.000366 MiB name: RG_ring_4_469729 00:06:28.991 size: 1.000366 MiB name: RG_ring_5_469729 00:06:28.991 size: 0.125366 MiB name: RG_ring_2_469729 00:06:28.992 size: 0.015991 MiB name: RG_ring_3_469729 00:06:28.992 end memzones------- 00:06:28.992 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:28.992 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:28.992 list of free elements. size: 12.519348 MiB 00:06:28.992 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:28.992 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:28.992 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:28.992 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:28.992 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:28.992 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:28.992 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:28.992 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:28.992 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:28.992 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:28.992 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:28.992 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:28.992 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:28.992 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:28.992 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:28.992 list of standard malloc elements. 
size: 199.218079 MiB 00:06:28.992 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:28.992 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:28.992 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:28.992 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:28.992 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:28.992 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:28.992 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:28.992 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:28.992 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:28.992 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:28.992 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:28.992 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:28.992 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:28.992 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:28.992 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:28.992 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:28.992 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:28.992 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:28.992 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:28.992 list of memzone associated elements. 
size: 602.262573 MiB 00:06:28.992 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:28.992 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:28.992 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:28.992 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:28.992 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:28.992 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_469729_0 00:06:28.992 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:28.992 associated memzone info: size: 48.002930 MiB name: MP_evtpool_469729_0 00:06:28.992 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:28.992 associated memzone info: size: 48.002930 MiB name: MP_msgpool_469729_0 00:06:28.992 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:28.992 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:28.992 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:28.992 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:28.992 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:28.992 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_469729 00:06:28.992 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:28.992 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_469729 00:06:28.992 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:28.992 associated memzone info: size: 1.007996 MiB name: MP_evtpool_469729 00:06:28.992 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:28.992 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:28.992 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:28.992 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:28.992 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:28.992 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:28.992 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:28.992 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:28.992 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:28.992 associated memzone info: size: 1.000366 MiB name: RG_ring_0_469729 00:06:28.992 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:28.992 associated memzone info: size: 1.000366 MiB name: RG_ring_1_469729 00:06:28.992 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:28.992 associated memzone info: size: 1.000366 MiB name: RG_ring_4_469729 00:06:28.992 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:28.992 associated memzone info: size: 1.000366 MiB name: RG_ring_5_469729 00:06:28.992 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:28.992 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_469729 00:06:28.992 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:28.992 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:28.992 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:28.992 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:28.992 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:28.992 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:28.992 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:28.992 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_469729 00:06:28.992 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:28.992 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:28.992 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:28.992 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:28.992 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:28.992 associated memzone info: size: 0.015991 MiB name: RG_ring_3_469729 00:06:28.992 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:28.992 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:28.992 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:28.992 associated memzone info: size: 0.000183 MiB name: MP_msgpool_469729 00:06:28.992 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:28.992 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_469729 00:06:28.992 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:28.992 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:28.992 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:28.992 22:45:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 469729 00:06:28.992 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 469729 ']' 00:06:28.992 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 469729 00:06:28.992 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:28.992 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.992 22:45:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 469729 00:06:28.992 22:45:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.992 22:45:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.992 22:45:27 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 469729' 00:06:28.992 killing process with pid 469729 00:06:28.992 22:45:27 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 469729 00:06:28.992 22:45:27 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 469729 00:06:29.252 00:06:29.252 real 0m1.369s 00:06:29.252 user 0m1.431s 00:06:29.252 sys 0m0.396s 00:06:29.252 22:45:27 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.252 22:45:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.252 ************************************ 00:06:29.252 END TEST dpdk_mem_utility 00:06:29.252 ************************************ 00:06:29.252 22:45:27 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:29.252 22:45:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.252 22:45:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.252 22:45:27 -- common/autotest_common.sh@10 -- # set +x 00:06:29.252 ************************************ 00:06:29.252 START TEST event 00:06:29.252 ************************************ 00:06:29.252 22:45:27 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:29.511 * Looking for test storage... 
00:06:29.511 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:29.511 22:45:27 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:29.511 22:45:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.511 22:45:27 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.511 22:45:27 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:29.511 22:45:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.511 22:45:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.511 ************************************ 00:06:29.511 START TEST event_perf 00:06:29.511 ************************************ 00:06:29.511 22:45:27 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.511 Running I/O for 1 seconds...[2024-07-24 22:45:27.522499] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:29.511 [2024-07-24 22:45:27.522602] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470009 ] 00:06:29.511 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.511 [2024-07-24 22:45:27.595070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.511 [2024-07-24 22:45:27.670452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.511 [2024-07-24 22:45:27.670554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.511 [2024-07-24 22:45:27.670658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.511 [2024-07-24 22:45:27.670659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.888 Running I/O for 1 seconds... 00:06:30.888 lcore 0: 181396 00:06:30.888 lcore 1: 181392 00:06:30.888 lcore 2: 181392 00:06:30.888 lcore 3: 181395 00:06:30.888 done. 00:06:30.888 00:06:30.888 real 0m1.230s 00:06:30.888 user 0m4.142s 00:06:30.888 sys 0m0.085s 00:06:30.888 22:45:28 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.888 22:45:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.888 ************************************ 00:06:30.888 END TEST event_perf 00:06:30.888 ************************************ 00:06:30.888 22:45:28 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:30.888 22:45:28 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:30.888 22:45:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.888 22:45:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.888 ************************************ 00:06:30.888 START TEST event_reactor 00:06:30.888 ************************************ 00:06:30.888 22:45:28 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:30.888 [2024-07-24 22:45:28.820634] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
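Each of the event micro-benchmarks in this block is a standalone binary under test/event/; a minimal sketch of running them by hand, assuming they are built in place (the core masks and -t run times mirror the invocations in this log):

  # event_perf reports events processed per lcore over the run time
  ./test/event/event_perf/event_perf -m 0xF -t 1
  # reactor exercises one-shot and periodic (tick) pollers on one core
  ./test/event/reactor/reactor -t 1
  # reactor_perf reports raw events per second on a single reactor
  ./test/event/reactor_perf/reactor_perf -t 1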
00:06:30.888 [2024-07-24 22:45:28.820706] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470246 ] 00:06:30.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.888 [2024-07-24 22:45:28.892451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.888 [2024-07-24 22:45:28.964183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.824 test_start 00:06:31.825 oneshot 00:06:31.825 tick 100 00:06:31.825 tick 100 00:06:31.825 tick 250 00:06:31.825 tick 100 00:06:31.825 tick 100 00:06:31.825 tick 100 00:06:31.825 tick 250 00:06:31.825 tick 500 00:06:31.825 tick 100 00:06:31.825 tick 100 00:06:31.825 tick 250 00:06:31.825 tick 100 00:06:31.825 tick 100 00:06:31.825 test_end 00:06:31.825 00:06:31.825 real 0m1.225s 00:06:31.825 user 0m1.139s 00:06:31.825 sys 0m0.082s 00:06:32.084 22:45:30 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.084 22:45:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:32.084 ************************************ 00:06:32.084 END TEST event_reactor 00:06:32.084 ************************************ 00:06:32.084 22:45:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.084 22:45:30 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:32.084 22:45:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.084 22:45:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.084 ************************************ 00:06:32.084 START TEST event_reactor_perf 00:06:32.084 ************************************ 00:06:32.084 22:45:30 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.084 [2024-07-24 22:45:30.113272] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:06:32.084 [2024-07-24 22:45:30.113346] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470477 ] 00:06:32.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.084 [2024-07-24 22:45:30.184722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.084 [2024-07-24 22:45:30.256902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.462 test_start 00:06:33.462 test_end 00:06:33.462 Performance: 921740 events per second 00:06:33.462 00:06:33.462 real 0m1.222s 00:06:33.462 user 0m1.131s 00:06:33.462 sys 0m0.086s 00:06:33.462 22:45:31 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.462 22:45:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.462 ************************************ 00:06:33.462 END TEST event_reactor_perf 00:06:33.462 ************************************ 00:06:33.462 22:45:31 event -- event/event.sh@49 -- # uname -s 00:06:33.462 22:45:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.462 22:45:31 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:33.462 22:45:31 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.462 22:45:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.462 22:45:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.462 ************************************ 00:06:33.462 START TEST event_scheduler 00:06:33.462 ************************************ 00:06:33.462 22:45:31 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:33.462 * Looking for test storage... 00:06:33.462 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:33.462 22:45:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:33.462 22:45:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=470741 00:06:33.462 22:45:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.462 22:45:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:33.462 22:45:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 470741 00:06:33.462 22:45:31 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 470741 ']' 00:06:33.462 22:45:31 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.462 22:45:31 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.462 22:45:31 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.462 22:45:31 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.462 22:45:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.462 [2024-07-24 22:45:31.504400] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:33.462 [2024-07-24 22:45:31.504470] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470741 ] 00:06:33.462 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.462 [2024-07-24 22:45:31.577595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.462 [2024-07-24 22:45:31.655551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.462 [2024-07-24 22:45:31.655658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.462 [2024-07-24 22:45:31.655738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.462 [2024-07-24 22:45:31.655739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:34.398 22:45:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 [2024-07-24 22:45:32.322115] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:34.398 [2024-07-24 22:45:32.322135] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:34.398 [2024-07-24 22:45:32.322144] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:34.398 [2024-07-24 22:45:32.322149] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:34.398 [2024-07-24 22:45:32.322155] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 [2024-07-24 22:45:32.390983] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
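Because the scheduler app above is launched with --wait-for-rpc, the dynamic scheduler is selected over RPC before framework initialization is allowed to finish. A minimal sketch of those two calls, assuming the default /var/tmp/spdk.sock socket:

  # choose the dynamic scheduler, then let framework init complete
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init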
00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 ************************************ 00:06:34.398 START TEST scheduler_create_thread 00:06:34.398 ************************************ 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 2 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 3 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 4 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 5 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 6 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 7 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 8 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 9 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 10 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.398 22:45:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.963 22:45:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.963 22:45:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.963 22:45:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.963 22:45:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.338 22:45:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.338 22:45:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:36.338 22:45:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:36.338 22:45:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.338 22:45:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.713 22:45:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.713 00:06:37.713 real 0m3.102s 00:06:37.713 user 0m0.023s 00:06:37.713 sys 0m0.006s 00:06:37.713 22:45:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.713 22:45:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.713 ************************************ 00:06:37.713 END TEST scheduler_create_thread 00:06:37.713 ************************************ 00:06:37.713 22:45:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:37.713 22:45:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 470741 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 470741 ']' 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 470741 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 470741 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 470741' 00:06:37.713 killing process with pid 470741 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 470741 00:06:37.713 22:45:35 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 470741 00:06:37.713 [2024-07-24 22:45:35.905955] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
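The scheduler_create_thread test just traced drives the target through an out-of-tree RPC plugin rather than built-in RPCs. A minimal sketch of the same calls issued manually, assuming rpc.py can import scheduler_plugin (for example with PYTHONPATH pointing at test/event/scheduler), the app is listening on /var/tmp/spdk.sock, and <thread_id> stands for the id returned by the create call:

  # spawn a thread pinned to core 0 at 100% activity
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # drop the thread to 50% activity, then remove it
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active <thread_id> 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete <thread_id>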
00:06:37.972 00:06:37.972 real 0m4.714s 00:06:37.972 user 0m9.153s 00:06:37.972 sys 0m0.372s 00:06:37.972 22:45:36 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.972 22:45:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.972 ************************************ 00:06:37.972 END TEST event_scheduler 00:06:37.972 ************************************ 00:06:37.972 22:45:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:37.972 22:45:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:37.972 22:45:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.972 22:45:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.972 22:45:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.972 ************************************ 00:06:37.972 START TEST app_repeat 00:06:37.972 ************************************ 00:06:37.972 22:45:36 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:37.972 22:45:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.230 22:45:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.230 22:45:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:38.230 22:45:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.230 22:45:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:38.230 22:45:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:38.230 22:45:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:38.231 22:45:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=471633 00:06:38.231 22:45:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.231 22:45:36 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:38.231 22:45:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 471633' 00:06:38.231 Process app_repeat pid: 471633 00:06:38.231 22:45:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.231 22:45:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:38.231 spdk_app_start Round 0 00:06:38.231 22:45:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 471633 /var/tmp/spdk-nbd.sock 00:06:38.231 22:45:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 471633 ']' 00:06:38.231 22:45:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.231 22:45:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.231 22:45:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.231 22:45:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.231 22:45:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.231 [2024-07-24 22:45:36.201856] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:06:38.231 [2024-07-24 22:45:36.201923] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471633 ] 00:06:38.231 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.231 [2024-07-24 22:45:36.275541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.231 [2024-07-24 22:45:36.357050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.231 [2024-07-24 22:45:36.357050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.166 22:45:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.166 22:45:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:39.166 22:45:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.166 Malloc0 00:06:39.166 22:45:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.425 Malloc1 00:06:39.425 22:45:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.425 /dev/nbd0 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.425 22:45:37 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.425 1+0 records in 00:06:39.425 1+0 records out 00:06:39.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198064 s, 20.7 MB/s 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.425 22:45:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.425 22:45:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.683 /dev/nbd1 00:06:39.683 22:45:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.683 22:45:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.683 1+0 records in 00:06:39.683 1+0 records out 00:06:39.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219929 s, 18.6 MB/s 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.683 22:45:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:39.683 22:45:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.683 
22:45:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.683 22:45:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.683 22:45:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.683 22:45:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.942 { 00:06:39.942 "nbd_device": "/dev/nbd0", 00:06:39.942 "bdev_name": "Malloc0" 00:06:39.942 }, 00:06:39.942 { 00:06:39.942 "nbd_device": "/dev/nbd1", 00:06:39.942 "bdev_name": "Malloc1" 00:06:39.942 } 00:06:39.942 ]' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.942 { 00:06:39.942 "nbd_device": "/dev/nbd0", 00:06:39.942 "bdev_name": "Malloc0" 00:06:39.942 }, 00:06:39.942 { 00:06:39.942 "nbd_device": "/dev/nbd1", 00:06:39.942 "bdev_name": "Malloc1" 00:06:39.942 } 00:06:39.942 ]' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.942 /dev/nbd1' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.942 /dev/nbd1' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.942 256+0 records in 00:06:39.942 256+0 records out 00:06:39.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103142 s, 102 MB/s 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.942 256+0 records in 00:06:39.942 256+0 records out 00:06:39.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143997 s, 72.8 MB/s 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.942 256+0 records in 00:06:39.942 256+0 records out 
00:06:39.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162228 s, 64.6 MB/s 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.942 22:45:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.200 22:45:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.458 22:45:38 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.458 22:45:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.716 22:45:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.716 22:45:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.974 22:45:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.974 [2024-07-24 22:45:39.113089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.233 [2024-07-24 22:45:39.183820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.233 [2024-07-24 22:45:39.183820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.233 [2024-07-24 22:45:39.223584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.233 [2024-07-24 22:45:39.223625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.766 22:45:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.766 22:45:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:43.766 spdk_app_start Round 1 00:06:43.766 22:45:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 471633 /var/tmp/spdk-nbd.sock 00:06:43.766 22:45:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 471633 ']' 00:06:43.766 22:45:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.766 22:45:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.766 22:45:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
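Each app_repeat round follows the same NBD round-trip: export a malloc bdev as a kernel block device, push a known random file through it, and compare what comes back. A minimal sketch of one round, assuming the app is serving RPCs on /var/tmp/spdk-nbd.sock, /dev/nbd0 is free, and with the scratch-file path shortened from the full workspace path used in this log:

  # 64 MiB malloc bdev with 4 KiB blocks, exposed as /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  # write 1 MiB of random data through the device and verify it byte-for-byte
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0
  # tear down: detach the NBD device and stop the app
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM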
00:06:43.766 22:45:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.766 22:45:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.024 22:45:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.024 22:45:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:44.024 22:45:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.283 Malloc0 00:06:44.283 22:45:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.283 Malloc1 00:06:44.541 22:45:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.541 /dev/nbd0 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.541 1+0 records in 00:06:44.541 1+0 records out 00:06:44.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144584 s, 28.3 MB/s 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:44.541 22:45:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.541 22:45:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.800 /dev/nbd1 00:06:44.800 22:45:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.800 22:45:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.800 1+0 records in 00:06:44.800 1+0 records out 00:06:44.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228794 s, 17.9 MB/s 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:44.800 22:45:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:44.800 22:45:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.800 22:45:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.800 22:45:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.800 22:45:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.800 22:45:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.058 { 00:06:45.058 "nbd_device": "/dev/nbd0", 00:06:45.058 "bdev_name": "Malloc0" 00:06:45.058 }, 00:06:45.058 { 00:06:45.058 "nbd_device": "/dev/nbd1", 00:06:45.058 "bdev_name": "Malloc1" 00:06:45.058 } 00:06:45.058 ]' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.058 { 00:06:45.058 "nbd_device": "/dev/nbd0", 00:06:45.058 "bdev_name": "Malloc0" 00:06:45.058 }, 00:06:45.058 { 00:06:45.058 "nbd_device": "/dev/nbd1", 00:06:45.058 "bdev_name": "Malloc1" 00:06:45.058 } 00:06:45.058 ]' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.058 /dev/nbd1' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.058 /dev/nbd1' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.058 256+0 records in 00:06:45.058 256+0 records out 00:06:45.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00977116 s, 107 MB/s 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.058 256+0 records in 00:06:45.058 256+0 records out 00:06:45.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145897 s, 71.9 MB/s 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.058 256+0 records in 00:06:45.058 256+0 records out 00:06:45.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016118 s, 65.1 MB/s 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.058 22:45:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.059 22:45:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.316 22:45:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.574 22:45:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.833 22:45:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.833 22:45:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.092 22:45:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.092 [2024-07-24 22:45:44.229405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.351 [2024-07-24 22:45:44.299665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.351 [2024-07-24 22:45:44.299665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.351 [2024-07-24 22:45:44.340081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.351 [2024-07-24 22:45:44.340123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.883 22:45:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.883 22:45:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.883 spdk_app_start Round 2 00:06:48.883 22:45:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 471633 /var/tmp/spdk-nbd.sock 00:06:48.883 22:45:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 471633 ']' 00:06:48.883 22:45:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.883 22:45:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.883 22:45:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
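Every nbd_start_disk call in these rounds is followed by waitfornbd, which blocks until the kernel exposes the device and a read through it succeeds. A minimal sketch of that helper, inferred from the common/autotest_common.sh xtrace above (the retry back-off and the exact temp-file handling are assumptions):

waitfornbd() {
    local nbd_name=$1
    local tmp_file=$rootdir/test/event/nbdtest    # destination seen in this run
    local i size

    # Poll /proc/partitions (up to 20 attempts) until the device appears.
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1    # assumed back-off; this run never needed a retry
    done

    # Read one 4096-byte block back through the device to prove it answers I/O.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of="$tmp_file" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        if [ "$size" != 0 ]; then
            break
        fi
        sleep 0.1    # assumed
    done
    return 0
}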
00:06:48.883 22:45:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.883 22:45:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.141 22:45:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.141 22:45:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:49.141 22:45:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.401 Malloc0 00:06:49.401 22:45:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.401 Malloc1 00:06:49.401 22:45:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.401 22:45:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.660 /dev/nbd0 00:06:49.660 22:45:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.660 22:45:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.660 1+0 records in 00:06:49.660 1+0 records out 00:06:49.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231749 s, 17.7 MB/s 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.660 22:45:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.660 22:45:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.660 22:45:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.660 22:45:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.919 /dev/nbd1 00:06:49.919 22:45:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.919 22:45:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.919 22:45:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:49.919 22:45:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.919 22:45:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.919 22:45:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.919 22:45:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.919 1+0 records in 00:06:49.919 1+0 records out 00:06:49.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196047 s, 20.9 MB/s 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.919 22:45:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.919 22:45:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.919 22:45:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.920 22:45:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.920 22:45:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.920 22:45:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.179 { 00:06:50.179 "nbd_device": "/dev/nbd0", 00:06:50.179 "bdev_name": "Malloc0" 00:06:50.179 }, 00:06:50.179 { 00:06:50.179 "nbd_device": "/dev/nbd1", 00:06:50.179 "bdev_name": "Malloc1" 00:06:50.179 } 00:06:50.179 ]' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.179 { 00:06:50.179 "nbd_device": "/dev/nbd0", 00:06:50.179 "bdev_name": "Malloc0" 00:06:50.179 }, 00:06:50.179 { 00:06:50.179 "nbd_device": "/dev/nbd1", 00:06:50.179 "bdev_name": "Malloc1" 00:06:50.179 } 00:06:50.179 ]' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.179 /dev/nbd1' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.179 /dev/nbd1' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.179 256+0 records in 00:06:50.179 256+0 records out 00:06:50.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103172 s, 102 MB/s 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.179 256+0 records in 00:06:50.179 256+0 records out 00:06:50.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146453 s, 71.6 MB/s 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.179 256+0 records in 00:06:50.179 256+0 records out 00:06:50.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160825 s, 65.2 MB/s 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.179 22:45:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.439 22:45:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.698 22:45:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.957 22:45:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.957 22:45:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.957 22:45:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.216 [2024-07-24 22:45:49.300491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.216 [2024-07-24 22:45:49.370140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.216 [2024-07-24 22:45:49.370140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.216 [2024-07-24 22:45:49.409812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.216 [2024-07-24 22:45:49.409855] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.507 22:45:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 471633 /var/tmp/spdk-nbd.sock 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 471633 ']' 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
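The dd and cmp bursts in each round come from nbd_dd_data_verify, which first writes 1 MiB of random data through every NBD device and then reads it back for a byte-level compare. Reconstructed from the trace (the helper lives in bdev/nbd_common.sh; the body below is an approximation):

nbd_dd_data_verify() {
    local nbd_list=($1)
    local operation=$2
    local tmp_file=$rootdir/test/event/nbdrandtest

    if [ "$operation" = write ]; then
        # Stage 1 MiB (256 x 4 KiB) of random data, then push it to each device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # Compare the first 1 MiB of every device against the staged file.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}

nbd_rpc_data_verify calls this twice per round, once with write and once with verify, which is why each round logs a 256-record urandom write, two 256-record device writes, and then two cmp invocations.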
00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:54.507 22:45:52 event.app_repeat -- event/event.sh@39 -- # killprocess 471633 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 471633 ']' 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 471633 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 471633 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 471633' 00:06:54.507 killing process with pid 471633 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@969 -- # kill 471633 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@974 -- # wait 471633 00:06:54.507 spdk_app_start is called in Round 0. 00:06:54.507 Shutdown signal received, stop current app iteration 00:06:54.507 Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 reinitialization... 00:06:54.507 spdk_app_start is called in Round 1. 00:06:54.507 Shutdown signal received, stop current app iteration 00:06:54.507 Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 reinitialization... 00:06:54.507 spdk_app_start is called in Round 2. 00:06:54.507 Shutdown signal received, stop current app iteration 00:06:54.507 Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 reinitialization... 00:06:54.507 spdk_app_start is called in Round 3. 
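Stripped of xtrace noise, the repeat loop that produced Rounds 0 through 2 has roughly this shape (helper and RPC names are taken from the trace; $repeat_pid and the surrounding setup in test/event/event.sh are assumed):

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # Wait for the (re)started app_repeat app to open its RPC socket.
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock

    # Two 64 MB malloc bdevs with a 4096-byte block size: Malloc0 and Malloc1.
    $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
    $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096

    # Export both over NBD, write and verify 1 MiB through each, detach again.
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

    # Ask the app to shut down; app_repeat then calls spdk_app_start for the next round.
    $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
done

# After the last round the app comes up once more and is killed for good.
waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
killprocess $repeat_pid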
00:06:54.507 Shutdown signal received, stop current app iteration 00:06:54.507 22:45:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:54.507 22:45:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:54.507 00:06:54.507 real 0m16.347s 00:06:54.507 user 0m35.243s 00:06:54.507 sys 0m2.594s 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.507 22:45:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.507 ************************************ 00:06:54.507 END TEST app_repeat 00:06:54.507 ************************************ 00:06:54.507 22:45:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:54.507 22:45:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.507 22:45:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.507 22:45:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.507 22:45:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.507 ************************************ 00:06:54.507 START TEST cpu_locks 00:06:54.507 ************************************ 00:06:54.507 22:45:52 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.507 * Looking for test storage... 00:06:54.507 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:54.507 22:45:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:54.507 22:45:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:54.508 22:45:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:54.508 22:45:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:54.508 22:45:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.508 22:45:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.508 22:45:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.508 ************************************ 00:06:54.508 START TEST default_locks 00:06:54.508 ************************************ 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=474448 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 474448 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 474448 ']' 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.508 22:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.508 22:45:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.768 [2024-07-24 22:45:52.727226] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:54.768 [2024-07-24 22:45:52.727286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474448 ] 00:06:54.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.768 [2024-07-24 22:45:52.795114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.768 [2024-07-24 22:45:52.875164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 474448 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 474448 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.706 lslocks: write error 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 474448 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 474448 ']' 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 474448 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474448 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.706 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.707 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474448' 00:06:55.707 killing process with pid 474448 00:06:55.707 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 474448 00:06:55.707 22:45:53 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 474448 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 474448 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 474448 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 474448 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 474448 ']' 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.966 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (474448) - No such process 00:06:55.966 ERROR: process (pid: 474448) is no longer running 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:55.966 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.967 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.967 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.967 22:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:55.967 22:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:55.967 22:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.967 22:45:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.967 00:06:55.967 real 0m1.417s 00:06:55.967 user 0m1.488s 00:06:55.967 sys 0m0.456s 00:06:55.967 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.967 22:45:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.967 ************************************ 00:06:55.967 END TEST default_locks 00:06:55.967 ************************************ 00:06:55.967 22:45:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:55.967 22:45:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.967 22:45:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.967 22:45:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.226 ************************************ 00:06:56.226 START TEST default_locks_via_rpc 00:06:56.226 ************************************ 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=474690 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 474690 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
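Condensed, the default_locks case that just completed starts a single-core target, checks that it took a CPU core lock, kills it, and then asserts both that waitforlisten on the dead PID fails and that no lock files remain. A sketch of that flow, with the backgrounding of spdk_tgt assumed and the mask as in this run:

# event/cpu_locks.sh, default_locks -- reconstructed from the trace above.
$rootdir/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
waitforlisten $spdk_tgt_pid

# The running target must hold an spdk_cpu_lock entry (locks_exist helper).
lslocks -p $spdk_tgt_pid | grep -q spdk_cpu_lock

killprocess $spdk_tgt_pid

# With the target gone, waitforlisten on the stale PID has to fail (NOT inverts
# the exit code), and the no_locks helper must find an empty list of lock files.
NOT waitforlisten $spdk_tgt_pid
no_locks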
00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 474690 ']' 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.226 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.226 [2024-07-24 22:45:54.211831] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:56.226 [2024-07-24 22:45:54.211900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474690 ] 00:06:56.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.227 [2024-07-24 22:45:54.266326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.227 [2024-07-24 22:45:54.339071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 474690 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 474690 00:06:56.486 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 474690 
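default_locks_via_rpc drives the same lock through RPC rather than process lifetime: framework_disable_cpumask_locks releases the core lock of a running target, and framework_enable_cpumask_locks takes it again. The sequence visible in the trace reduces to (rpc_cmd is the wrapper used throughout these scripts; backgrounding assumed):

$rootdir/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
waitforlisten $spdk_tgt_pid

# Drop the core lock at runtime; afterwards no lock files may exist.
rpc_cmd framework_disable_cpumask_locks
no_locks

# Re-take the lock and confirm lslocks now reports spdk_cpu_lock for this PID.
rpc_cmd framework_enable_cpumask_locks
lslocks -p $spdk_tgt_pid | grep -q spdk_cpu_lock

killprocess $spdk_tgt_pid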
00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 474690 ']' 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 474690 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474690 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474690' 00:06:56.745 killing process with pid 474690 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 474690 00:06:56.745 22:45:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 474690 00:06:57.004 00:06:57.004 real 0m0.866s 00:06:57.005 user 0m0.846s 00:06:57.005 sys 0m0.380s 00:06:57.005 22:45:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.005 22:45:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.005 ************************************ 00:06:57.005 END TEST default_locks_via_rpc 00:06:57.005 ************************************ 00:06:57.005 22:45:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:57.005 22:45:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.005 22:45:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.005 22:45:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.005 ************************************ 00:06:57.005 START TEST non_locking_app_on_locked_coremask 00:06:57.005 ************************************ 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=474930 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 474930 /var/tmp/spdk.sock 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 474930 ']' 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:57.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.005 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.005 [2024-07-24 22:45:55.147093] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:57.005 [2024-07-24 22:45:55.147172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474930 ] 00:06:57.005 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.263 [2024-07-24 22:45:55.218897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.263 [2024-07-24 22:45:55.298985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=474968 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 474968 /var/tmp/spdk2.sock 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 474968 ']' 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.831 22:45:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.831 [2024-07-24 22:45:55.973536] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:06:57.831 [2024-07-24 22:45:55.973579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474968 ] 00:06:57.831 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.090 [2024-07-24 22:45:56.041916] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
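The point of non_locking_app_on_locked_coremask is that a second target can start on an already-locked core as long as it opts out of the lock. The two launch commands in the trace reduce to the following, with the backgrounding and PID capture assumed:

# First target: takes the core-0 lock and serves the default RPC socket.
$rootdir/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
waitforlisten $spdk_tgt_pid /var/tmp/spdk.sock

# Second target: same core mask, but cpumask locks disabled and a private RPC
# socket, so it comes up cleanly next to the first instance.
$rootdir/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!
waitforlisten $spdk_tgt_pid2 /var/tmp/spdk2.sock

# The lock itself stays with the first target (locks_exist 474930 above) before
# both instances are torn down with killprocess.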
00:06:58.090 [2024-07-24 22:45:56.041943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.090 [2024-07-24 22:45:56.202789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.658 22:45:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.658 22:45:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:58.658 22:45:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 474930 00:06:58.658 22:45:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 474930 00:06:58.658 22:45:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.916 lslocks: write error 00:06:58.916 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 474930 00:06:58.916 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 474930 ']' 00:06:58.916 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 474930 00:06:58.916 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:58.916 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.916 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474930 00:06:59.175 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.175 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.175 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474930' 00:06:59.175 killing process with pid 474930 00:06:59.175 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 474930 00:06:59.175 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 474930 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 474968 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 474968 ']' 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 474968 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474968 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474968' 00:06:59.743 killing 
process with pid 474968 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 474968 00:06:59.743 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 474968 00:07:00.003 00:07:00.003 real 0m2.932s 00:07:00.003 user 0m3.087s 00:07:00.003 sys 0m0.835s 00:07:00.003 22:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.003 22:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.003 ************************************ 00:07:00.003 END TEST non_locking_app_on_locked_coremask 00:07:00.003 ************************************ 00:07:00.003 22:45:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:00.003 22:45:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.003 22:45:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.003 22:45:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.003 ************************************ 00:07:00.003 START TEST locking_app_on_unlocked_coremask 00:07:00.003 ************************************ 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=475400 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 475400 /var/tmp/spdk.sock 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 475400 ']' 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.003 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.003 [2024-07-24 22:45:58.146745] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:00.003 [2024-07-24 22:45:58.146806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475400 ] 00:07:00.003 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.262 [2024-07-24 22:45:58.219147] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.262 [2024-07-24 22:45:58.219172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.262 [2024-07-24 22:45:58.299450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=475611 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 475611 /var/tmp/spdk2.sock 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 475611 ']' 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.830 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.830 [2024-07-24 22:45:58.988722] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:00.830 [2024-07-24 22:45:58.988800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475611 ] 00:07:00.830 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.089 [2024-07-24 22:45:59.064239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.089 [2024-07-24 22:45:59.213160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.657 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.657 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:01.657 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 475611 00:07:01.657 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 475611 00:07:01.657 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.916 lslocks: write error 00:07:01.916 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 475400 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 475400 ']' 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 475400 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475400 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475400' 00:07:01.916 killing process with pid 475400 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 475400 00:07:01.916 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 475400 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 475611 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 475611 ']' 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 475611 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475611 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475611' 00:07:02.484 killing process with pid 475611 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 475611 00:07:02.484 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 475611 00:07:03.052 00:07:03.052 real 0m2.848s 00:07:03.052 user 0m3.025s 00:07:03.052 sys 0m0.786s 00:07:03.052 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.052 22:46:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.052 ************************************ 00:07:03.052 END TEST locking_app_on_unlocked_coremask 00:07:03.052 ************************************ 00:07:03.052 22:46:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:03.052 22:46:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.052 22:46:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.052 22:46:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.052 ************************************ 00:07:03.052 START TEST locking_app_on_locked_coremask 00:07:03.052 ************************************ 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=475871 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 475871 /var/tmp/spdk.sock 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 475871 ']' 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.052 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.052 [2024-07-24 22:46:01.063158] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:03.052 [2024-07-24 22:46:01.063225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475871 ] 00:07:03.052 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.052 [2024-07-24 22:46:01.130754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.052 [2024-07-24 22:46:01.210635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=476081 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 476081 /var/tmp/spdk2.sock 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 476081 /var/tmp/spdk2.sock 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 476081 /var/tmp/spdk2.sock 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 476081 ']' 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.990 22:46:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.990 [2024-07-24 22:46:01.907083] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
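In this test the second target is started on the already-locked core mask without --disable-cpumask-locks, so it is expected to abort, and the NOT wrapper around waitforlisten treats that non-zero exit as a pass. A rough equivalent outside the harness, again assuming an SPDK build under ./build/bin:

    ./build/bin/spdk_tgt -m 0x1 &                        # holds the core-0 lock
    sleep 3
    # same mask, locking left enabled: exits with
    # "Unable to acquire lock on assigned core mask - exiting."
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock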
00:07:03.990 [2024-07-24 22:46:01.907158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476081 ] 00:07:03.990 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.990 [2024-07-24 22:46:01.980857] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 475871 has claimed it. 00:07:03.990 [2024-07-24 22:46:01.980885] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:04.558 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (476081) - No such process 00:07:04.558 ERROR: process (pid: 476081) is no longer running 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 475871 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 475871 00:07:04.558 22:46:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.126 lslocks: write error 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 475871 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 475871 ']' 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 475871 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475871 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475871' 00:07:05.126 killing process with pid 475871 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 475871 00:07:05.126 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 475871 00:07:05.384 00:07:05.384 real 0m2.380s 00:07:05.384 user 0m2.599s 00:07:05.384 sys 0m0.648s 00:07:05.384 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.384 22:46:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.384 ************************************ 00:07:05.384 END TEST locking_app_on_locked_coremask 00:07:05.384 ************************************ 00:07:05.384 22:46:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:05.384 22:46:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.384 22:46:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.384 22:46:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.384 ************************************ 00:07:05.384 START TEST locking_overlapped_coremask 00:07:05.384 ************************************ 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=476330 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 476330 /var/tmp/spdk.sock 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 476330 ']' 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.384 22:46:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.384 [2024-07-24 22:46:03.508462] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:05.384 [2024-07-24 22:46:03.508528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476330 ] 00:07:05.384 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.384 [2024-07-24 22:46:03.577449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.642 [2024-07-24 22:46:03.659383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.642 [2024-07-24 22:46:03.659410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.642 [2024-07-24 22:46:03.659410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=476539 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 476539 /var/tmp/spdk2.sock 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 476539 /var/tmp/spdk2.sock 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 476539 /var/tmp/spdk2.sock 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 476539 ']' 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.210 22:46:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.210 [2024-07-24 22:46:04.374156] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:06.210 [2024-07-24 22:46:04.374228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476539 ] 00:07:06.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.469 [2024-07-24 22:46:04.454340] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 476330 has claimed it. 00:07:06.469 [2024-07-24 22:46:04.454379] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.037 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (476539) - No such process 00:07:07.037 ERROR: process (pid: 476539) is no longer running 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 476330 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 476330 ']' 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 476330 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 476330 00:07:07.037 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.038 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.038 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 476330' 00:07:07.038 killing process with pid 476330 00:07:07.038 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 
-- # kill 476330 00:07:07.038 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 476330 00:07:07.297 00:07:07.297 real 0m1.872s 00:07:07.297 user 0m5.295s 00:07:07.297 sys 0m0.412s 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.297 ************************************ 00:07:07.297 END TEST locking_overlapped_coremask 00:07:07.297 ************************************ 00:07:07.297 22:46:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:07.297 22:46:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.297 22:46:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.297 22:46:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.297 ************************************ 00:07:07.297 START TEST locking_overlapped_coremask_via_rpc 00:07:07.297 ************************************ 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=476627 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 476627 /var/tmp/spdk.sock 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 476627 ']' 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.297 22:46:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.297 [2024-07-24 22:46:05.449446] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:07.297 [2024-07-24 22:46:05.449500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476627 ] 00:07:07.297 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.556 [2024-07-24 22:46:05.519504] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.556 [2024-07-24 22:46:05.519535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.556 [2024-07-24 22:46:05.604352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.556 [2024-07-24 22:46:05.604453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.556 [2024-07-24 22:46:05.604453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.123 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.123 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:08.123 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=476801 00:07:08.123 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 476801 /var/tmp/spdk2.sock 00:07:08.123 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:08.123 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 476801 ']' 00:07:08.123 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.123 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.124 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.124 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.124 22:46:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.124 [2024-07-24 22:46:06.303492] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:08.124 [2024-07-24 22:46:06.303565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476801 ] 00:07:08.383 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.383 [2024-07-24 22:46:06.382182] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:08.383 [2024-07-24 22:46:06.382215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.383 [2024-07-24 22:46:06.535335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.383 [2024-07-24 22:46:06.535372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.383 [2024-07-24 22:46:06.535374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.956 [2024-07-24 22:46:07.145129] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 476627 has claimed it. 
00:07:08.956 request: 00:07:08.956 { 00:07:08.956 "method": "framework_enable_cpumask_locks", 00:07:08.956 "req_id": 1 00:07:08.956 } 00:07:08.956 Got JSON-RPC error response 00:07:08.956 response: 00:07:08.956 { 00:07:08.956 "code": -32603, 00:07:08.956 "message": "Failed to claim CPU core: 2" 00:07:08.956 } 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 476627 /var/tmp/spdk.sock 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 476627 ']' 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.956 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 476801 /var/tmp/spdk2.sock 00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 476801 ']' 00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
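The -32603 error object above is returned to the second target (core mask 0x1c) when framework_enable_cpumask_locks tries to claim core 2, which the first target (mask 0x7) locked when the same RPC was issued against its default socket. Roughly, using rpc.py against the two sockets from this run:

    # first target (cores 0-2, /var/tmp/spdk.sock): takes the core locks after startup
    ./scripts/rpc.py framework_enable_cpumask_locks
    # second target (cores 2-4): core 2 overlaps, so the call fails with -32603
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks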
00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.269 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.553 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.553 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.553 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:09.553 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:09.553 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:09.553 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:09.553 00:07:09.553 real 0m2.086s 00:07:09.553 user 0m0.843s 00:07:09.553 sys 0m0.178s 00:07:09.553 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.553 22:46:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.553 ************************************ 00:07:09.553 END TEST locking_overlapped_coremask_via_rpc 00:07:09.553 ************************************ 00:07:09.553 22:46:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:09.553 22:46:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 476627 ]] 00:07:09.553 22:46:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 476627 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 476627 ']' 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 476627 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 476627 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 476627' 00:07:09.553 killing process with pid 476627 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 476627 00:07:09.553 22:46:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 476627 00:07:09.864 22:46:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 476801 ]] 00:07:09.864 22:46:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 476801 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 476801 ']' 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 476801 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 476801 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 476801' 00:07:09.864 killing process with pid 476801 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 476801 00:07:09.864 22:46:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 476801 00:07:10.128 22:46:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.128 22:46:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:10.128 22:46:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 476627 ]] 00:07:10.128 22:46:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 476627 00:07:10.128 22:46:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 476627 ']' 00:07:10.128 22:46:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 476627 00:07:10.128 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (476627) - No such process 00:07:10.128 22:46:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 476627 is not found' 00:07:10.128 Process with pid 476627 is not found 00:07:10.128 22:46:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 476801 ]] 00:07:10.128 22:46:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 476801 00:07:10.128 22:46:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 476801 ']' 00:07:10.128 22:46:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 476801 00:07:10.128 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (476801) - No such process 00:07:10.128 22:46:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 476801 is not found' 00:07:10.128 Process with pid 476801 is not found 00:07:10.128 22:46:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.128 00:07:10.128 real 0m15.665s 00:07:10.128 user 0m27.611s 00:07:10.128 sys 0m4.558s 00:07:10.128 22:46:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.128 22:46:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.128 ************************************ 00:07:10.128 END TEST cpu_locks 00:07:10.128 ************************************ 00:07:10.128 00:07:10.128 real 0m40.898s 00:07:10.128 user 1m18.624s 00:07:10.128 sys 0m8.101s 00:07:10.128 22:46:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.128 22:46:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.128 ************************************ 00:07:10.128 END TEST event 00:07:10.128 ************************************ 00:07:10.128 22:46:08 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:10.128 22:46:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.128 22:46:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.128 22:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:10.387 ************************************ 00:07:10.387 START TEST thread 00:07:10.387 ************************************ 00:07:10.387 22:46:08 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:10.387 * Looking for test storage... 00:07:10.387 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:10.387 22:46:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.387 22:46:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:10.387 22:46:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.387 22:46:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.387 ************************************ 00:07:10.387 START TEST thread_poller_perf 00:07:10.387 ************************************ 00:07:10.387 22:46:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.387 [2024-07-24 22:46:08.488878] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:10.387 [2024-07-24 22:46:08.488969] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477324 ] 00:07:10.387 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.387 [2024-07-24 22:46:08.560242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.645 [2024-07-24 22:46:08.632759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.645 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:11.581 ====================================== 00:07:11.581 busy:2103997448 (cyc) 00:07:11.581 total_run_count: 850000 00:07:11.581 tsc_hz: 2100000000 (cyc) 00:07:11.581 ====================================== 00:07:11.581 poller_cost: 2475 (cyc), 1178 (nsec) 00:07:11.581 00:07:11.581 real 0m1.226s 00:07:11.581 user 0m1.130s 00:07:11.581 sys 0m0.092s 00:07:11.581 22:46:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.581 22:46:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.581 ************************************ 00:07:11.581 END TEST thread_poller_perf 00:07:11.581 ************************************ 00:07:11.581 22:46:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:11.581 22:46:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:11.581 22:46:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.581 22:46:09 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.581 ************************************ 00:07:11.581 START TEST thread_poller_perf 00:07:11.581 ************************************ 00:07:11.581 22:46:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:11.581 [2024-07-24 22:46:09.784041] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:11.581 [2024-07-24 22:46:09.784146] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477562 ] 00:07:11.839 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.839 [2024-07-24 22:46:09.857294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.839 [2024-07-24 22:46:09.928761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.839 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:13.214 ====================================== 00:07:13.214 busy:2101039042 (cyc) 00:07:13.214 total_run_count: 13658000 00:07:13.214 tsc_hz: 2100000000 (cyc) 00:07:13.214 ====================================== 00:07:13.214 poller_cost: 153 (cyc), 72 (nsec) 00:07:13.214 00:07:13.214 real 0m1.228s 00:07:13.214 user 0m1.132s 00:07:13.214 sys 0m0.091s 00:07:13.214 22:46:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.214 22:46:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.214 ************************************ 00:07:13.214 END TEST thread_poller_perf 00:07:13.214 ************************************ 00:07:13.214 22:46:11 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:13.214 22:46:11 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:13.214 22:46:11 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.214 22:46:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.214 22:46:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.214 ************************************ 00:07:13.214 START TEST thread_spdk_lock 00:07:13.214 ************************************ 00:07:13.214 22:46:11 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:13.214 [2024-07-24 22:46:11.077278] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:13.214 [2024-07-24 22:46:11.077344] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477783 ] 00:07:13.214 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.214 [2024-07-24 22:46:11.146704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.214 [2024-07-24 22:46:11.225463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.214 [2024-07-24 22:46:11.225463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.780 [2024-07-24 22:46:11.712233] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:13.780 [2024-07-24 22:46:11.712267] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:13.780 [2024-07-24 22:46:11.712274] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14d5bc0 00:07:13.780 [2024-07-24 22:46:11.713139] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:13.780 [2024-07-24 22:46:11.713244] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:13.780 [2024-07-24 22:46:11.713261] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:13.780 Starting test contend 00:07:13.780 Worker Delay Wait us Hold us Total us 00:07:13.780 0 3 174390 185630 360020 00:07:13.780 1 5 91667 285654 377322 00:07:13.780 PASS test contend 00:07:13.780 Starting test hold_by_poller 00:07:13.780 PASS test hold_by_poller 00:07:13.780 Starting test hold_by_message 00:07:13.780 PASS test hold_by_message 00:07:13.780 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:13.780 100014 assertions passed 00:07:13.780 0 assertions failed 00:07:13.780 00:07:13.780 real 0m0.713s 00:07:13.780 user 0m1.113s 00:07:13.780 sys 0m0.085s 00:07:13.780 22:46:11 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.780 22:46:11 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:13.780 ************************************ 00:07:13.780 END TEST thread_spdk_lock 00:07:13.780 ************************************ 00:07:13.780 00:07:13.780 real 0m3.458s 00:07:13.780 user 0m3.485s 00:07:13.780 sys 0m0.471s 00:07:13.780 22:46:11 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.780 22:46:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.780 ************************************ 00:07:13.780 END TEST thread 00:07:13.780 ************************************ 00:07:13.780 22:46:11 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:13.780 22:46:11 -- spdk/autotest.sh@189 -- # run_test app_cmdline 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:13.780 22:46:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.780 22:46:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.780 22:46:11 -- common/autotest_common.sh@10 -- # set +x 00:07:13.780 ************************************ 00:07:13.780 START TEST app_cmdline 00:07:13.780 ************************************ 00:07:13.780 22:46:11 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:13.780 * Looking for test storage... 00:07:13.780 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:13.780 22:46:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:13.780 22:46:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=477865 00:07:13.780 22:46:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 477865 00:07:13.780 22:46:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:13.780 22:46:11 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 477865 ']' 00:07:13.780 22:46:11 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.780 22:46:11 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.780 22:46:11 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.780 22:46:11 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.780 22:46:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.039 [2024-07-24 22:46:11.988751] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
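This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable on /var/tmp/spdk.sock; any other method is rejected with -32601 (Method not found), as the env_dpdk_get_mem_stats call further down shows. A short sketch, paths relative to the repository root:

    # on the allow list: returns the version object printed below
    ./scripts/rpc.py spdk_get_version
    # not on the allow list: fails with {"code": -32601, "message": "Method not found"}
    ./scripts/rpc.py env_dpdk_get_mem_stats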
00:07:14.039 [2024-07-24 22:46:11.988832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477865 ] 00:07:14.039 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.039 [2024-07-24 22:46:12.057134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.039 [2024-07-24 22:46:12.138847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.604 22:46:12 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.604 22:46:12 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:14.604 22:46:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:14.863 { 00:07:14.863 "version": "SPDK v24.09-pre git sha1 f41dbc235", 00:07:14.863 "fields": { 00:07:14.863 "major": 24, 00:07:14.863 "minor": 9, 00:07:14.863 "patch": 0, 00:07:14.863 "suffix": "-pre", 00:07:14.863 "commit": "f41dbc235" 00:07:14.863 } 00:07:14.863 } 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:14.863 22:46:12 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:14.863 22:46:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.863 22:46:12 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:14.863 22:46:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.863 22:46:12 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:14.863 22:46:12 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.863 22:46:12 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:14.863 22:46:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.863 22:46:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:14.863 22:46:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.863 22:46:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:14.863 22:46:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.863 22:46:13 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:14.863 22:46:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:14.863 22:46:13 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.122 request: 00:07:15.122 { 00:07:15.122 "method": "env_dpdk_get_mem_stats", 00:07:15.122 "req_id": 1 00:07:15.122 } 00:07:15.122 Got JSON-RPC error response 00:07:15.122 response: 00:07:15.122 { 00:07:15.122 "code": -32601, 00:07:15.122 "message": "Method not found" 00:07:15.122 } 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.122 22:46:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 477865 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 477865 ']' 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 477865 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477865 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477865' 00:07:15.122 killing process with pid 477865 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@969 -- # kill 477865 00:07:15.122 22:46:13 app_cmdline -- common/autotest_common.sh@974 -- # wait 477865 00:07:15.382 00:07:15.382 real 0m1.643s 00:07:15.382 user 0m1.934s 00:07:15.382 sys 0m0.433s 00:07:15.382 22:46:13 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.382 22:46:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.382 ************************************ 00:07:15.382 END TEST app_cmdline 00:07:15.382 ************************************ 00:07:15.382 22:46:13 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:15.382 22:46:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.382 22:46:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.382 22:46:13 -- common/autotest_common.sh@10 -- # set +x 00:07:15.382 ************************************ 00:07:15.382 START TEST version 00:07:15.382 ************************************ 00:07:15.382 22:46:13 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:15.641 * Looking for test storage... 
00:07:15.641 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:15.641 22:46:13 version -- app/version.sh@17 -- # get_header_version major 00:07:15.641 22:46:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:15.641 22:46:13 version -- app/version.sh@14 -- # cut -f2 00:07:15.641 22:46:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.641 22:46:13 version -- app/version.sh@17 -- # major=24 00:07:15.641 22:46:13 version -- app/version.sh@18 -- # get_header_version minor 00:07:15.641 22:46:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:15.641 22:46:13 version -- app/version.sh@14 -- # cut -f2 00:07:15.641 22:46:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.641 22:46:13 version -- app/version.sh@18 -- # minor=9 00:07:15.641 22:46:13 version -- app/version.sh@19 -- # get_header_version patch 00:07:15.641 22:46:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:15.641 22:46:13 version -- app/version.sh@14 -- # cut -f2 00:07:15.641 22:46:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.641 22:46:13 version -- app/version.sh@19 -- # patch=0 00:07:15.641 22:46:13 version -- app/version.sh@20 -- # get_header_version suffix 00:07:15.641 22:46:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:15.641 22:46:13 version -- app/version.sh@14 -- # cut -f2 00:07:15.641 22:46:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.641 22:46:13 version -- app/version.sh@20 -- # suffix=-pre 00:07:15.641 22:46:13 version -- app/version.sh@22 -- # version=24.9 00:07:15.641 22:46:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:15.641 22:46:13 version -- app/version.sh@28 -- # version=24.9rc0 00:07:15.641 22:46:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:15.641 22:46:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:15.641 22:46:13 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:15.642 22:46:13 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:15.642 00:07:15.642 real 0m0.158s 00:07:15.642 user 0m0.087s 00:07:15.642 sys 0m0.106s 00:07:15.642 22:46:13 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.642 22:46:13 version -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 ************************************ 00:07:15.642 END TEST version 00:07:15.642 ************************************ 00:07:15.642 22:46:13 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@202 -- # uname -s 00:07:15.642 22:46:13 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:15.642 22:46:13 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:15.642 22:46:13 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:15.642 22:46:13 -- spdk/autotest.sh@215 -- 
# '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:15.642 22:46:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.642 22:46:13 -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 22:46:13 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:07:15.642 22:46:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:07:15.642 22:46:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:07:15.642 22:46:13 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:07:15.642 22:46:13 -- spdk/autotest.sh@376 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:15.642 22:46:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.642 22:46:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.642 22:46:13 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 ************************************ 00:07:15.901 START TEST llvm_fuzz 00:07:15.901 ************************************ 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:15.901 * Looking for test storage... 
00:07:15.901 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:15.901 22:46:13 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.901 22:46:13 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 ************************************ 00:07:15.901 START TEST nvmf_llvm_fuzz 00:07:15.901 ************************************ 00:07:15.901 22:46:13 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:15.901 * Looking for test storage... 
00:07:15.901 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:15.901 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:15.902 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:15.902 #define SPDK_CONFIG_H 00:07:15.902 #define SPDK_CONFIG_APPS 1 00:07:15.902 #define SPDK_CONFIG_ARCH native 00:07:15.902 #undef SPDK_CONFIG_ASAN 00:07:15.902 #undef SPDK_CONFIG_AVAHI 00:07:15.902 #undef SPDK_CONFIG_CET 00:07:15.902 #define SPDK_CONFIG_COVERAGE 1 00:07:15.902 #define SPDK_CONFIG_CROSS_PREFIX 00:07:15.902 #undef SPDK_CONFIG_CRYPTO 00:07:15.902 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:15.902 #undef SPDK_CONFIG_CUSTOMOCF 00:07:15.902 #undef SPDK_CONFIG_DAOS 00:07:15.902 #define SPDK_CONFIG_DAOS_DIR 00:07:15.902 #define SPDK_CONFIG_DEBUG 1 00:07:15.902 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:15.902 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:15.902 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:15.902 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:15.902 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:15.902 #undef SPDK_CONFIG_DPDK_UADK 00:07:15.902 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:15.902 #define SPDK_CONFIG_EXAMPLES 1 00:07:15.902 #undef SPDK_CONFIG_FC 00:07:15.902 #define SPDK_CONFIG_FC_PATH 00:07:15.902 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:15.902 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:15.902 #undef SPDK_CONFIG_FUSE 00:07:15.902 #define SPDK_CONFIG_FUZZER 1 00:07:15.902 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:15.902 #undef SPDK_CONFIG_GOLANG 00:07:15.902 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:15.902 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:15.902 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:15.902 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:15.902 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:15.902 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:15.902 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:15.902 #define SPDK_CONFIG_IDXD 1 00:07:15.902 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:15.902 #undef SPDK_CONFIG_IPSEC_MB 00:07:15.902 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:15.902 #define SPDK_CONFIG_ISAL 1 00:07:15.902 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:07:15.902 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:15.902 #define SPDK_CONFIG_LIBDIR 00:07:15.902 #undef SPDK_CONFIG_LTO 00:07:15.902 #define SPDK_CONFIG_MAX_LCORES 128 00:07:15.902 #define SPDK_CONFIG_NVME_CUSE 1 00:07:15.902 #undef SPDK_CONFIG_OCF 00:07:15.902 #define SPDK_CONFIG_OCF_PATH 00:07:15.902 #define SPDK_CONFIG_OPENSSL_PATH 00:07:15.902 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:15.902 #define SPDK_CONFIG_PGO_DIR 00:07:15.902 #undef SPDK_CONFIG_PGO_USE 00:07:15.902 #define SPDK_CONFIG_PREFIX /usr/local 00:07:15.902 #undef SPDK_CONFIG_RAID5F 00:07:15.902 #undef SPDK_CONFIG_RBD 00:07:15.902 #define SPDK_CONFIG_RDMA 1 00:07:15.902 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:15.902 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:15.902 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:15.902 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:15.902 #undef SPDK_CONFIG_SHARED 00:07:15.902 #undef SPDK_CONFIG_SMA 00:07:15.902 #define SPDK_CONFIG_TESTS 1 00:07:15.902 #undef SPDK_CONFIG_TSAN 00:07:15.902 #define SPDK_CONFIG_UBLK 1 00:07:15.902 #define SPDK_CONFIG_UBSAN 1 00:07:15.902 #undef SPDK_CONFIG_UNIT_TESTS 00:07:15.902 #undef SPDK_CONFIG_URING 00:07:15.903 #define SPDK_CONFIG_URING_PATH 00:07:15.903 #undef SPDK_CONFIG_URING_ZNS 00:07:15.903 #undef SPDK_CONFIG_USDT 00:07:15.903 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:15.903 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:15.903 #define SPDK_CONFIG_VFIO_USER 1 00:07:15.903 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:15.903 #define SPDK_CONFIG_VHOST 1 00:07:15.903 #define SPDK_CONFIG_VIRTIO 1 00:07:15.903 #undef SPDK_CONFIG_VTUNE 00:07:15.903 #define SPDK_CONFIG_VTUNE_DIR 00:07:15.903 #define SPDK_CONFIG_WERROR 1 00:07:15.903 #define SPDK_CONFIG_WPDK_DIR 00:07:15.903 #undef SPDK_CONFIG_XNVME 00:07:15.903 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:15.903 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:15.903 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:15.903 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:16.164 
22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:16.164 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:16.165 22:46:14 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.165 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # cat 00:07:16.166 22:46:14 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export valgrind= 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # valgrind= 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # uname -s 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:07:16.166 22:46:14 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@281 -- # MAKE=make 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j88 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@301 -- # TEST_MODE= 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@320 -- # [[ -z 478441 ]] 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@320 -- # kill -0 478441 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local mount target_dir 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.QaqT20 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.QaqT20/tests/nvmf /tmp/spdk.QaqT20 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # df -T 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # 
uses["$mount"]=0 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=948682752 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4335747072 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=54596263936 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742043136 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=7145779200 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:16.166 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30867644416 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871019520 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=12342374400 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348411904 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=6037504 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30870708224 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=315392 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:16.167 22:46:14 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=6174199808 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174203904 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:07:16.167 * Looking for test storage... 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@370 -- # local target_space new_size 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mount=/ 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # target_space=54596263936 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # new_size=9360371712 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.167 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # return 0 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.167 22:46:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:16.167 [2024-07-24 22:46:14.265094] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:16.167 [2024-07-24 22:46:14.265175] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478490 ] 00:07:16.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.426 [2024-07-24 22:46:14.518954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.426 [2024-07-24 22:46:14.602748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.685 [2024-07-24 22:46:14.661410] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.685 [2024-07-24 22:46:14.677653] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:16.685 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.685 INFO: Seed: 884394447 00:07:16.685 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:16.685 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:16.685 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:16.685 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.685 #2 INITED exec/s: 0 rss: 65Mb 00:07:16.685 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:16.685 This may also happen if the target rejected all inputs we tried so far 00:07:16.685 [2024-07-24 22:46:14.722312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.685 [2024-07-24 22:46:14.722343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.942 NEW_FUNC[1/699]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:16.942 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:16.942 #6 NEW cov: 11942 ft: 11939 corp: 2/82b lim: 320 exec/s: 0 rss: 71Mb L: 81/81 MS: 4 ShuffleBytes-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:07:16.942 [2024-07-24 22:46:14.925017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.942 [2024-07-24 22:46:14.925063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.942 NEW_FUNC[1/1]: 0x17c8ee0 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:07:16.942 #15 NEW cov: 12095 ft: 12777 corp: 3/173b lim: 320 exec/s: 0 rss: 71Mb L: 91/91 MS: 4 CrossOver-ChangeBinInt-CrossOver-InsertRepeatedBytes- 00:07:16.942 [2024-07-24 22:46:14.985232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.942 [2024-07-24 22:46:14.985262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.942 #16 NEW cov: 12101 ft: 13027 corp: 4/258b lim: 320 exec/s: 0 rss: 71Mb L: 85/91 MS: 1 CMP- DE: "\017\000\000\000"- 00:07:16.942 [2024-07-24 22:46:15.055384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.942 [2024-07-24 22:46:15.055411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.942 #17 NEW cov: 12186 ft: 13259 corp: 5/349b lim: 320 exec/s: 0 rss: 71Mb L: 91/91 MS: 1 ChangeBinInt- 00:07:16.942 [2024-07-24 22:46:15.125759] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.942 [2024-07-24 22:46:15.125786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.200 #18 NEW cov: 12186 ft: 13355 corp: 6/440b lim: 320 exec/s: 0 rss: 72Mb L: 91/91 MS: 1 ChangeByte- 00:07:17.200 [2024-07-24 22:46:15.185864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0f000000 cdw11:00000000 00:07:17.201 [2024-07-24 22:46:15.185888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.201 #19 NEW cov: 12186 ft: 13450 corp: 7/525b lim: 320 exec/s: 0 rss: 72Mb L: 85/91 MS: 1 CopyPart- 00:07:17.201 [2024-07-24 22:46:15.246035] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.201 [2024-07-24 
22:46:15.246060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.201 #20 NEW cov: 12186 ft: 13544 corp: 8/617b lim: 320 exec/s: 0 rss: 72Mb L: 92/92 MS: 1 InsertByte- 00:07:17.201 [2024-07-24 22:46:15.296352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.201 [2024-07-24 22:46:15.296378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.201 #21 NEW cov: 12186 ft: 13559 corp: 9/708b lim: 320 exec/s: 0 rss: 72Mb L: 91/92 MS: 1 ShuffleBytes- 00:07:17.201 [2024-07-24 22:46:15.346464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000f 00:07:17.201 [2024-07-24 22:46:15.346487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.201 #22 NEW cov: 12186 ft: 13634 corp: 10/794b lim: 320 exec/s: 0 rss: 72Mb L: 86/92 MS: 1 InsertByte- 00:07:17.459 [2024-07-24 22:46:15.406792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.459 [2024-07-24 22:46:15.406817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.459 #23 NEW cov: 12186 ft: 13689 corp: 11/890b lim: 320 exec/s: 0 rss: 72Mb L: 96/96 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:17.459 [2024-07-24 22:46:15.466944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.459 [2024-07-24 22:46:15.466969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.459 #24 NEW cov: 12186 ft: 13719 corp: 12/981b lim: 320 exec/s: 0 rss: 72Mb L: 91/96 MS: 1 CopyPart- 00:07:17.459 [2024-07-24 22:46:15.527173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.459 [2024-07-24 22:46:15.527196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.460 #25 NEW cov: 12186 ft: 13762 corp: 13/1052b lim: 320 exec/s: 0 rss: 72Mb L: 71/96 MS: 1 EraseBytes- 00:07:17.460 [2024-07-24 22:46:15.577492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000f0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.460 [2024-07-24 22:46:15.577516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.460 #26 NEW cov: 12186 ft: 13831 corp: 14/1143b lim: 320 exec/s: 0 rss: 72Mb L: 91/96 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:07:17.460 [2024-07-24 22:46:15.627663] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.460 [2024-07-24 22:46:15.627686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.460 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:17.460 #27 NEW cov: 12203 ft: 13887 
corp: 15/1234b lim: 320 exec/s: 0 rss: 72Mb L: 91/96 MS: 1 ChangeBinInt- 00:07:17.718 [2024-07-24 22:46:15.677975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000f0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.718 [2024-07-24 22:46:15.677999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.718 #28 NEW cov: 12203 ft: 13901 corp: 16/1326b lim: 320 exec/s: 28 rss: 72Mb L: 92/96 MS: 1 InsertByte- 00:07:17.718 [2024-07-24 22:46:15.738106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.718 [2024-07-24 22:46:15.738129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.718 #29 NEW cov: 12203 ft: 13959 corp: 17/1417b lim: 320 exec/s: 29 rss: 72Mb L: 91/96 MS: 1 ChangeBinInt- 00:07:17.718 [2024-07-24 22:46:15.798376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.718 [2024-07-24 22:46:15.798399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.718 #30 NEW cov: 12203 ft: 13992 corp: 18/1508b lim: 320 exec/s: 30 rss: 72Mb L: 91/96 MS: 1 ShuffleBytes- 00:07:17.718 [2024-07-24 22:46:15.848449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.718 [2024-07-24 22:46:15.848472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.718 #31 NEW cov: 12203 ft: 13999 corp: 19/1599b lim: 320 exec/s: 31 rss: 72Mb L: 91/96 MS: 1 ShuffleBytes- 00:07:17.718 [2024-07-24 22:46:15.898674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.718 [2024-07-24 22:46:15.898697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.002 #32 NEW cov: 12203 ft: 14012 corp: 20/1696b lim: 320 exec/s: 32 rss: 72Mb L: 97/97 MS: 1 CrossOver- 00:07:18.002 [2024-07-24 22:46:15.958909] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.002 [2024-07-24 22:46:15.958933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.002 #33 NEW cov: 12203 ft: 14073 corp: 21/1793b lim: 320 exec/s: 33 rss: 72Mb L: 97/97 MS: 1 ChangeByte- 00:07:18.002 [2024-07-24 22:46:16.019127] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.002 [2024-07-24 22:46:16.019151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.002 #34 NEW cov: 12203 ft: 14084 corp: 22/1884b lim: 320 exec/s: 34 rss: 72Mb L: 91/97 MS: 1 ChangeByte- 00:07:18.002 [2024-07-24 22:46:16.079341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.002 [2024-07-24 22:46:16.079363] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.002 #35 NEW cov: 12203 ft: 14090 corp: 23/1975b lim: 320 exec/s: 35 rss: 73Mb L: 91/97 MS: 1 ChangeBit- 00:07:18.002 [2024-07-24 22:46:16.139709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.002 [2024-07-24 22:46:16.139733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.002 #36 NEW cov: 12203 ft: 14104 corp: 24/2073b lim: 320 exec/s: 36 rss: 73Mb L: 98/98 MS: 1 InsertByte- 00:07:18.002 [2024-07-24 22:46:16.189909] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.002 [2024-07-24 22:46:16.189934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.260 #37 NEW cov: 12203 ft: 14123 corp: 25/2144b lim: 320 exec/s: 37 rss: 73Mb L: 71/98 MS: 1 ChangeBinInt- 00:07:18.260 [2024-07-24 22:46:16.260509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.260 [2024-07-24 22:46:16.260535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.260 [2024-07-24 22:46:16.260628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.260 [2024-07-24 22:46:16.260642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.260 #38 NEW cov: 12203 ft: 14271 corp: 26/2308b lim: 320 exec/s: 38 rss: 73Mb L: 164/164 MS: 1 InsertRepeatedBytes- 00:07:18.260 [2024-07-24 22:46:16.310423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.260 [2024-07-24 22:46:16.310448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.260 #39 NEW cov: 12203 ft: 14279 corp: 27/2397b lim: 320 exec/s: 39 rss: 73Mb L: 89/164 MS: 1 CMP- DE: "\376\377\377\377"- 00:07:18.260 [2024-07-24 22:46:16.360521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.260 [2024-07-24 22:46:16.360547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.260 #40 NEW cov: 12203 ft: 14287 corp: 28/2488b lim: 320 exec/s: 40 rss: 73Mb L: 91/164 MS: 1 ChangeByte- 00:07:18.260 [2024-07-24 22:46:16.430902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.260 [2024-07-24 22:46:16.430928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.519 [2024-07-24 22:46:16.501182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.519 [2024-07-24 22:46:16.501206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.519 
#42 NEW cov: 12203 ft: 14335 corp: 29/2577b lim: 320 exec/s: 42 rss: 73Mb L: 89/164 MS: 2 ChangeBinInt-ChangeBit- 00:07:18.519 [2024-07-24 22:46:16.551384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.519 [2024-07-24 22:46:16.551407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.519 #43 NEW cov: 12203 ft: 14341 corp: 30/2668b lim: 320 exec/s: 43 rss: 73Mb L: 91/164 MS: 1 ChangeBit- 00:07:18.519 [2024-07-24 22:46:16.601935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.519 [2024-07-24 22:46:16.601958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.519 [2024-07-24 22:46:16.602036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.519 [2024-07-24 22:46:16.602049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.519 #44 NEW cov: 12210 ft: 14354 corp: 31/2832b lim: 320 exec/s: 44 rss: 74Mb L: 164/164 MS: 1 CopyPart- 00:07:18.519 [2024-07-24 22:46:16.662316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.519 [2024-07-24 22:46:16.662339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.519 [2024-07-24 22:46:16.662423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.519 [2024-07-24 22:46:16.662436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.519 [2024-07-24 22:46:16.662516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 00:07:18.519 [2024-07-24 22:46:16.662529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.519 #45 NEW cov: 12210 ft: 14565 corp: 32/3032b lim: 320 exec/s: 45 rss: 74Mb L: 200/200 MS: 1 InsertRepeatedBytes- 00:07:18.519 [2024-07-24 22:46:16.711981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.519 [2024-07-24 22:46:16.712004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.778 #46 NEW cov: 12210 ft: 14579 corp: 33/3121b lim: 320 exec/s: 23 rss: 74Mb L: 89/200 MS: 1 ChangeBinInt- 00:07:18.778 #46 DONE cov: 12210 ft: 14579 corp: 33/3121b lim: 320 exec/s: 23 rss: 74Mb 00:07:18.778 ###### Recommended dictionary. ###### 00:07:18.778 "\017\000\000\000" # Uses: 1 00:07:18.778 "\000\000\000\000" # Uses: 0 00:07:18.778 "\376\377\377\377" # Uses: 0 00:07:18.778 ###### End of recommended dictionary. 
###### 00:07:18.778 Done 46 runs in 2 second(s) 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:18.778 22:46:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:18.778 [2024-07-24 22:46:16.893569] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:18.778 [2024-07-24 22:46:16.893651] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478919 ] 00:07:18.778 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.036 [2024-07-24 22:46:17.142916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.036 [2024-07-24 22:46:17.222974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.294 [2024-07-24 22:46:17.281764] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.294 [2024-07-24 22:46:17.298021] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:19.294 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.294 INFO: Seed: 3502395853 00:07:19.294 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:19.294 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:19.294 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:19.294 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.294 #2 INITED exec/s: 0 rss: 63Mb 00:07:19.294 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.294 This may also happen if the target rejected all inputs we tried so far 00:07:19.294 [2024-07-24 22:46:17.345801] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.294 [2024-07-24 22:46:17.345874] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.294 [2024-07-24 22:46:17.345982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.294 [2024-07-24 22:46:17.346020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.294 [2024-07-24 22:46:17.346049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.294 [2024-07-24 22:46:17.346063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.553 NEW_FUNC[1/700]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:19.553 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:19.553 #16 NEW cov: 11987 ft: 11969 corp: 2/16b lim: 30 exec/s: 0 rss: 70Mb L: 15/15 MS: 4 CrossOver-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:07:19.553 [2024-07-24 22:46:17.536278] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.553 [2024-07-24 22:46:17.536377] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:19.553 [2024-07-24 22:46:17.536431] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:19.553 [2024-07-24 22:46:17.536482] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:19.553 [2024-07-24 22:46:17.536534] 
ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:19.553 [2024-07-24 22:46:17.536639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.536661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.553 [2024-07-24 22:46:17.536691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41410241 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.536704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.553 [2024-07-24 22:46:17.536730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e2e202e2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.536743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.553 [2024-07-24 22:46:17.536768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e2e202e2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.536780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.553 [2024-07-24 22:46:17.536805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.536817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:19.553 NEW_FUNC[1/1]: 0x101c770 in posix_sock_read /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1496 00:07:19.553 #17 NEW cov: 12155 ft: 13192 corp: 3/46b lim: 30 exec/s: 0 rss: 71Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:19.553 [2024-07-24 22:46:17.626410] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.553 [2024-07-24 22:46:17.626542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.626562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.553 #18 NEW cov: 12161 ft: 13917 corp: 4/54b lim: 30 exec/s: 0 rss: 71Mb L: 8/30 MS: 1 EraseBytes- 00:07:19.553 [2024-07-24 22:46:17.697294] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.553 [2024-07-24 22:46:17.697427] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.553 [2024-07-24 22:46:17.697666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.697703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.553 [2024-07-24 22:46:17.697772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.697790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.553 #19 NEW cov: 12246 ft: 14276 corp: 5/68b lim: 30 exec/s: 0 rss: 71Mb L: 14/30 MS: 1 EraseBytes- 00:07:19.553 [2024-07-24 22:46:17.737293] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.553 [2024-07-24 22:46:17.737510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.553 [2024-07-24 22:46:17.737533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.812 #20 NEW cov: 12246 ft: 14387 corp: 6/77b lim: 30 exec/s: 0 rss: 71Mb L: 9/30 MS: 1 CrossOver- 00:07:19.812 [2024-07-24 22:46:17.777408] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x4141 00:07:19.812 [2024-07-24 22:46:17.777528] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.812 [2024-07-24 22:46:17.777739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41410041 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.777763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.812 [2024-07-24 22:46:17.777814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.777825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.812 #21 NEW cov: 12246 ft: 14497 corp: 7/91b lim: 30 exec/s: 0 rss: 71Mb L: 14/30 MS: 1 ChangeBit- 00:07:19.812 [2024-07-24 22:46:17.827552] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:19.812 [2024-07-24 22:46:17.827786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.827808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.812 #22 NEW cov: 12246 ft: 14551 corp: 8/97b lim: 30 exec/s: 0 rss: 71Mb L: 6/30 MS: 1 EraseBytes- 00:07:19.812 [2024-07-24 22:46:17.877685] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:19.812 [2024-07-24 22:46:17.877910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e8418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.877933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.812 #23 NEW cov: 12246 ft: 14678 corp: 9/103b lim: 30 exec/s: 0 rss: 71Mb L: 6/30 MS: 1 ChangeByte- 00:07:19.812 [2024-07-24 22:46:17.927870] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.812 [2024-07-24 22:46:17.928007] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.812 [2024-07-24 22:46:17.928224] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.928246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.812 [2024-07-24 22:46:17.928299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.928310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.812 #24 NEW cov: 12246 ft: 14743 corp: 10/117b lim: 30 exec/s: 0 rss: 71Mb L: 14/30 MS: 1 ChangeBinInt- 00:07:19.812 [2024-07-24 22:46:17.968039] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.812 [2024-07-24 22:46:17.968184] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (328968) > buf size (4096) 00:07:19.812 [2024-07-24 22:46:17.968297] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (86356) > buf size (4096) 00:07:19.812 [2024-07-24 22:46:17.968412] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.812 [2024-07-24 22:46:17.968624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.968647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.812 [2024-07-24 22:46:17.968701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.968712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.812 [2024-07-24 22:46:17.968761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:54540054 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.968772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.812 [2024-07-24 22:46:17.968824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:54548154 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:17.968835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.812 #25 NEW cov: 12269 ft: 14822 corp: 11/143b lim: 30 exec/s: 0 rss: 71Mb L: 26/30 MS: 1 InsertRepeatedBytes- 00:07:19.812 [2024-07-24 22:46:18.008068] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004106 00:07:19.812 [2024-07-24 22:46:18.008189] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:19.812 [2024-07-24 22:46:18.008394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:18.008417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:19.812 [2024-07-24 22:46:18.008471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.812 [2024-07-24 22:46:18.008484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.071 #26 NEW cov: 12269 ft: 14895 corp: 12/158b lim: 30 exec/s: 0 rss: 71Mb L: 15/30 MS: 1 InsertByte- 00:07:20.071 [2024-07-24 22:46:18.048186] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x4141 00:07:20.071 [2024-07-24 22:46:18.048320] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.071 [2024-07-24 22:46:18.048535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41410041 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.048557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.048611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.048623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.071 #27 NEW cov: 12269 ft: 14952 corp: 13/172b lim: 30 exec/s: 0 rss: 72Mb L: 14/30 MS: 1 CrossOver- 00:07:20.071 [2024-07-24 22:46:18.098345] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.071 [2024-07-24 22:46:18.098483] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.071 [2024-07-24 22:46:18.098699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.098722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.098776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:410a8141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.098788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.071 #28 NEW cov: 12269 ft: 15004 corp: 14/188b lim: 30 exec/s: 0 rss: 72Mb L: 16/30 MS: 1 CopyPart- 00:07:20.071 [2024-07-24 22:46:18.138401] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.071 [2024-07-24 22:46:18.138638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:415b8141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.138662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.071 #29 NEW cov: 12269 ft: 15059 corp: 15/197b lim: 30 exec/s: 0 rss: 72Mb L: 9/30 MS: 1 InsertByte- 00:07:20.071 [2024-07-24 22:46:18.188571] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.071 [2024-07-24 22:46:18.188790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 
cdw10:0a418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.188813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.071 #30 NEW cov: 12269 ft: 15074 corp: 16/203b lim: 30 exec/s: 0 rss: 72Mb L: 6/30 MS: 1 CopyPart- 00:07:20.071 [2024-07-24 22:46:18.228805] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (328968) > buf size (4096) 00:07:20.071 [2024-07-24 22:46:18.229264] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.071 [2024-07-24 22:46:18.229490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.229512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.229568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.229579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.229630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.229641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.229695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.229706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.229756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.229766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:20.071 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:20.071 #31 NEW cov: 12303 ft: 15143 corp: 17/233b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:20.071 [2024-07-24 22:46:18.268917] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (328968) > buf size (4096) 00:07:20.071 [2024-07-24 22:46:18.269171] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:20.071 [2024-07-24 22:46:18.269287] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:20.071 [2024-07-24 22:46:18.269400] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.071 [2024-07-24 22:46:18.269624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.269647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.269702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.269713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.269765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.269776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.269829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.269840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.071 [2024-07-24 22:46:18.269891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.071 [2024-07-24 22:46:18.269902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:20.329 #32 NEW cov: 12303 ft: 15198 corp: 18/263b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CMP- DE: "\376\377\377\377"- 00:07:20.329 [2024-07-24 22:46:18.318959] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (499976) > buf size (4096) 00:07:20.329 [2024-07-24 22:46:18.319196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e8418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.329 [2024-07-24 22:46:18.319222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.329 #33 NEW cov: 12303 ft: 15238 corp: 19/269b lim: 30 exec/s: 33 rss: 72Mb L: 6/30 MS: 1 ChangeBit- 00:07:20.329 [2024-07-24 22:46:18.369126] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.329 [2024-07-24 22:46:18.369360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.329 [2024-07-24 22:46:18.369383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.329 #34 NEW cov: 12303 ft: 15258 corp: 20/278b lim: 30 exec/s: 34 rss: 72Mb L: 9/30 MS: 1 ChangeBinInt- 00:07:20.329 [2024-07-24 22:46:18.409206] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.330 [2024-07-24 22:46:18.409430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.330 [2024-07-24 22:46:18.409453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.330 #35 NEW cov: 12303 ft: 15283 corp: 21/288b lim: 30 exec/s: 35 rss: 72Mb L: 10/30 MS: 1 InsertByte- 00:07:20.330 [2024-07-24 22:46:18.459370] 
ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.330 [2024-07-24 22:46:18.459489] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.330 [2024-07-24 22:46:18.459702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.330 [2024-07-24 22:46:18.459724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.330 [2024-07-24 22:46:18.459778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.330 [2024-07-24 22:46:18.459789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.330 #36 NEW cov: 12303 ft: 15301 corp: 22/304b lim: 30 exec/s: 36 rss: 72Mb L: 16/30 MS: 1 ShuffleBytes- 00:07:20.330 [2024-07-24 22:46:18.509633] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.330 [2024-07-24 22:46:18.509757] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:20.330 [2024-07-24 22:46:18.509872] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:20.330 [2024-07-24 22:46:18.509987] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:20.330 [2024-07-24 22:46:18.510105] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.330 [2024-07-24 22:46:18.510327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.330 [2024-07-24 22:46:18.510351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.330 [2024-07-24 22:46:18.510404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41410241 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.330 [2024-07-24 22:46:18.510415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.330 [2024-07-24 22:46:18.510468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e2e202e2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.330 [2024-07-24 22:46:18.510479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.330 [2024-07-24 22:46:18.510532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e29a02e2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.330 [2024-07-24 22:46:18.510543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.330 [2024-07-24 22:46:18.510594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.330 [2024-07-24 22:46:18.510605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:07:20.588 #37 NEW cov: 12303 ft: 15325 corp: 23/334b lim: 30 exec/s: 37 rss: 72Mb L: 30/30 MS: 1 ChangeByte- 00:07:20.588 [2024-07-24 22:46:18.559631] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.559752] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.559967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418140 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.559989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.588 [2024-07-24 22:46:18.560042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.560053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.588 #38 NEW cov: 12303 ft: 15336 corp: 24/348b lim: 30 exec/s: 38 rss: 72Mb L: 14/30 MS: 1 ShuffleBytes- 00:07:20.588 [2024-07-24 22:46:18.609807] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.609930] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.610147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.610170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.588 [2024-07-24 22:46:18.610223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.610234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.588 #39 NEW cov: 12303 ft: 15385 corp: 25/363b lim: 30 exec/s: 39 rss: 72Mb L: 15/30 MS: 1 ChangeBit- 00:07:20.588 [2024-07-24 22:46:18.649926] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.650059] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002141 00:07:20.588 [2024-07-24 22:46:18.650186] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.588 [2024-07-24 22:46:18.650416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.650439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.588 [2024-07-24 22:46:18.650490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.650502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.588 [2024-07-24 22:46:18.650552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:6 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.650564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.588 #45 NEW cov: 12303 ft: 15635 corp: 26/381b lim: 30 exec/s: 45 rss: 72Mb L: 18/30 MS: 1 InsertRepeatedBytes- 00:07:20.588 [2024-07-24 22:46:18.689959] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.690087] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.690295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.690318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.588 [2024-07-24 22:46:18.690370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41458141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.690381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.588 #46 NEW cov: 12303 ft: 15639 corp: 27/396b lim: 30 exec/s: 46 rss: 72Mb L: 15/30 MS: 1 ChangeBit- 00:07:20.588 [2024-07-24 22:46:18.740122] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003141 00:07:20.588 [2024-07-24 22:46:18.740238] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.740460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.740482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.588 [2024-07-24 22:46:18.740534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.588 [2024-07-24 22:46:18.740545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.588 #47 NEW cov: 12303 ft: 15646 corp: 28/412b lim: 30 exec/s: 47 rss: 72Mb L: 16/30 MS: 1 InsertByte- 00:07:20.588 [2024-07-24 22:46:18.780310] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.588 [2024-07-24 22:46:18.780449] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:20.588 [2024-07-24 22:46:18.780566] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:20.588 [2024-07-24 22:46:18.780678] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:20.589 [2024-07-24 22:46:18.780794] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.589 [2024-07-24 22:46:18.781012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.589 [2024-07-24 22:46:18.781034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.589 [2024-07-24 22:46:18.781089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41410241 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.589 [2024-07-24 22:46:18.781101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.589 [2024-07-24 22:46:18.781153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e2e202e2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.589 [2024-07-24 22:46:18.781165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.589 [2024-07-24 22:46:18.781217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e2e202e2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.589 [2024-07-24 22:46:18.781228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.589 [2024-07-24 22:46:18.781291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.589 [2024-07-24 22:46:18.781302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:20.848 #48 NEW cov: 12303 ft: 15693 corp: 29/442b lim: 30 exec/s: 48 rss: 72Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:20.848 [2024-07-24 22:46:18.820328] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.848 [2024-07-24 22:46:18.820453] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000feff 00:07:20.848 [2024-07-24 22:46:18.820674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:18.820697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.848 [2024-07-24 22:46:18.820750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:18.820761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.848 #49 NEW cov: 12303 ft: 15710 corp: 30/456b lim: 30 exec/s: 49 rss: 72Mb L: 14/30 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:07:20.848 [2024-07-24 22:46:18.870513] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.848 [2024-07-24 22:46:18.870749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0b0a8141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:18.870772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.848 #51 NEW cov: 12303 ft: 15719 corp: 31/467b lim: 30 exec/s: 51 rss: 72Mb L: 11/30 MS: 2 ChangeBit-CrossOver- 00:07:20.848 [2024-07-24 22:46:18.910561] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.848 [2024-07-24 22:46:18.910796] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e84181c9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:18.910819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.848 #52 NEW cov: 12303 ft: 15724 corp: 32/473b lim: 30 exec/s: 52 rss: 72Mb L: 6/30 MS: 1 ChangeBinInt- 00:07:20.848 [2024-07-24 22:46:18.950752] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.848 [2024-07-24 22:46:18.950877] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002141 00:07:20.848 [2024-07-24 22:46:18.950988] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004141 00:07:20.848 [2024-07-24 22:46:18.951203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:18.951224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.848 [2024-07-24 22:46:18.951279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:18.951289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.848 [2024-07-24 22:46:18.951340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:41418341 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:18.951350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.848 #53 NEW cov: 12303 ft: 15729 corp: 33/492b lim: 30 exec/s: 53 rss: 72Mb L: 19/30 MS: 1 InsertByte- 00:07:20.848 [2024-07-24 22:46:19.000834] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:20.848 [2024-07-24 22:46:19.001053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41e88141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:19.001080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.848 #54 NEW cov: 12303 ft: 15740 corp: 34/501b lim: 30 exec/s: 54 rss: 72Mb L: 9/30 MS: 1 CrossOver- 00:07:20.848 [2024-07-24 22:46:19.041019] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (328968) > buf size (4096) 00:07:20.848 [2024-07-24 22:46:19.041267] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000fffe 00:07:20.848 [2024-07-24 22:46:19.041385] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:20.848 [2024-07-24 22:46:19.041500] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:20.848 [2024-07-24 22:46:19.041718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:19.041741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.848 
[2024-07-24 22:46:19.041793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:19.041806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.848 [2024-07-24 22:46:19.041858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:19.041869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.848 [2024-07-24 22:46:19.041920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:19.041931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.848 [2024-07-24 22:46:19.041985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.848 [2024-07-24 22:46:19.041996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.106 #55 NEW cov: 12303 ft: 15751 corp: 35/531b lim: 30 exec/s: 55 rss: 73Mb L: 30/30 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:07:21.106 [2024-07-24 22:46:19.091088] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3741 00:07:21.106 [2024-07-24 22:46:19.091215] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:21.106 [2024-07-24 22:46:19.091432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41410041 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.106 [2024-07-24 22:46:19.091455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.106 [2024-07-24 22:46:19.091506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.106 [2024-07-24 22:46:19.091517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.106 #56 NEW cov: 12303 ft: 15784 corp: 36/545b lim: 30 exec/s: 56 rss: 73Mb L: 14/30 MS: 1 ChangeBinInt- 00:07:21.106 [2024-07-24 22:46:19.131246] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004149 00:07:21.106 [2024-07-24 22:46:19.131374] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004141 00:07:21.106 [2024-07-24 22:46:19.131490] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:21.106 [2024-07-24 22:46:19.131703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.106 [2024-07-24 22:46:19.131726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.106 [2024-07-24 22:46:19.131776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41410241 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.106 [2024-07-24 22:46:19.131788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.107 [2024-07-24 22:46:19.131837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.107 [2024-07-24 22:46:19.131847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.107 #57 NEW cov: 12303 ft: 15804 corp: 37/563b lim: 30 exec/s: 57 rss: 73Mb L: 18/30 MS: 1 CrossOver- 00:07:21.107 [2024-07-24 22:46:19.171289] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:21.107 [2024-07-24 22:46:19.171524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41e881fe cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.107 [2024-07-24 22:46:19.171545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.107 #58 NEW cov: 12303 ft: 15810 corp: 38/573b lim: 30 exec/s: 58 rss: 73Mb L: 10/30 MS: 1 InsertByte- 00:07:21.107 [2024-07-24 22:46:19.221496] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:21.107 [2024-07-24 22:46:19.221620] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (328968) > buf size (4096) 00:07:21.107 [2024-07-24 22:46:19.221736] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (86356) > buf size (4096) 00:07:21.107 [2024-07-24 22:46:19.221960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.107 [2024-07-24 22:46:19.221982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.107 [2024-07-24 22:46:19.222035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.107 [2024-07-24 22:46:19.222046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.107 [2024-07-24 22:46:19.222114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:54540054 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.107 [2024-07-24 22:46:19.222126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.107 #59 NEW cov: 12310 ft: 15828 corp: 39/596b lim: 30 exec/s: 59 rss: 73Mb L: 23/30 MS: 1 EraseBytes- 00:07:21.107 [2024-07-24 22:46:19.271551] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:21.107 [2024-07-24 22:46:19.271765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.107 [2024-07-24 22:46:19.271787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.107 #65 NEW cov: 12310 ft: 15842 corp: 40/606b lim: 30 exec/s: 
65 rss: 73Mb L: 10/30 MS: 1 EraseBytes- 00:07:21.107 [2024-07-24 22:46:19.311844] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004141 00:07:21.107 [2024-07-24 22:46:19.311969] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:21.107 [2024-07-24 22:46:19.312089] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:21.107 [2024-07-24 22:46:19.312211] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000e2e2 00:07:21.365 [2024-07-24 22:46:19.312331] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000410a 00:07:21.365 [2024-07-24 22:46:19.312556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.365 [2024-07-24 22:46:19.312579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.365 [2024-07-24 22:46:19.312631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:41410241 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.365 [2024-07-24 22:46:19.312642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.366 [2024-07-24 22:46:19.312695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e2e202e2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.366 [2024-07-24 22:46:19.312706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.366 [2024-07-24 22:46:19.312757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e2f202e2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.366 [2024-07-24 22:46:19.312769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.366 [2024-07-24 22:46:19.312818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:41418141 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.366 [2024-07-24 22:46:19.312829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.366 #66 NEW cov: 12310 ft: 15859 corp: 41/636b lim: 30 exec/s: 33 rss: 73Mb L: 30/30 MS: 1 ChangeBit- 00:07:21.366 #66 DONE cov: 12310 ft: 15859 corp: 41/636b lim: 30 exec/s: 33 rss: 73Mb 00:07:21.366 ###### Recommended dictionary. ###### 00:07:21.366 "\376\377\377\377" # Uses: 2 00:07:21.366 ###### End of recommended dictionary. 
###### 00:07:21.366 Done 66 runs in 2 second(s) 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.366 22:46:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:21.366 [2024-07-24 22:46:19.494752] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:21.366 [2024-07-24 22:46:19.494816] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479357 ] 00:07:21.366 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.624 [2024-07-24 22:46:19.744354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.624 [2024-07-24 22:46:19.827734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.882 [2024-07-24 22:46:19.886338] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.882 [2024-07-24 22:46:19.902564] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:21.882 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.882 INFO: Seed: 1814422617 00:07:21.882 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:21.882 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:21.882 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.882 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.882 #2 INITED exec/s: 0 rss: 64Mb 00:07:21.882 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:21.882 This may also happen if the target rejected all inputs we tried so far 00:07:21.882 [2024-07-24 22:46:19.957927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000001e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.882 [2024-07-24 22:46:19.957959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.140 NEW_FUNC[1/700]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:22.140 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.140 #24 NEW cov: 11998 ft: 11988 corp: 2/11b lim: 35 exec/s: 0 rss: 71Mb L: 10/10 MS: 2 InsertByte-CMP- DE: "\000\000\000\000\000\000\000?"- 00:07:22.140 [2024-07-24 22:46:20.108412] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.140 [2024-07-24 22:46:20.108731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.140 [2024-07-24 22:46:20.108790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.140 #25 NEW cov: 12120 ft: 12608 corp: 3/21b lim: 35 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:22.140 [2024-07-24 22:46:20.168348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6363000a cdw11:63006363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.140 [2024-07-24 22:46:20.168373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.140 #26 NEW cov: 12126 ft: 12951 corp: 4/28b lim: 35 exec/s: 0 rss: 71Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:22.140 [2024-07-24 22:46:20.208464] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: 
*ERROR*: Identify Namespace for invalid NSID 0 00:07:22.140 [2024-07-24 22:46:20.208843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.140 [2024-07-24 22:46:20.208871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.140 [2024-07-24 22:46:20.208922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a63003f cdw11:63006363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.140 [2024-07-24 22:46:20.208933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.140 #27 NEW cov: 12211 ft: 13600 corp: 5/43b lim: 35 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000?"- 00:07:22.140 [2024-07-24 22:46:20.258444] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.140 [2024-07-24 22:46:20.258766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.140 [2024-07-24 22:46:20.258791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.140 [2024-07-24 22:46:20.258843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a63003f cdw11:63005c63 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.140 [2024-07-24 22:46:20.258855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.140 #28 NEW cov: 12211 ft: 13694 corp: 6/58b lim: 35 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 ChangeBinInt- 00:07:22.140 [2024-07-24 22:46:20.308889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.140 [2024-07-24 22:46:20.308912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.140 [2024-07-24 22:46:20.308980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:17000063 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.140 [2024-07-24 22:46:20.308992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.140 #29 NEW cov: 12211 ft: 13774 corp: 7/76b lim: 35 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 CMP- DE: "\035*\263\242\214c\027\000"- 00:07:22.398 [2024-07-24 22:46:20.348808] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.398 [2024-07-24 22:46:20.348928] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.398 [2024-07-24 22:46:20.349149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000001e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.398 [2024-07-24 22:46:20.349173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.398 [2024-07-24 22:46:20.349226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 
cdw10:3f0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.398 [2024-07-24 22:46:20.349240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.398 [2024-07-24 22:46:20.349291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3f0a0000 cdw11:63006363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.398 [2024-07-24 22:46:20.349306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.398 #30 NEW cov: 12211 ft: 14053 corp: 8/98b lim: 35 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 CrossOver- 00:07:22.398 [2024-07-24 22:46:20.388918] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.398 [2024-07-24 22:46:20.389057] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.398 [2024-07-24 22:46:20.389280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000001e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.398 [2024-07-24 22:46:20.389303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.398 [2024-07-24 22:46:20.389355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3f0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.398 [2024-07-24 22:46:20.389367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.398 [2024-07-24 22:46:20.389419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3f0a0000 cdw11:25006363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.398 [2024-07-24 22:46:20.389432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.398 #31 NEW cov: 12211 ft: 14081 corp: 9/120b lim: 35 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 ChangeByte- 00:07:22.398 [2024-07-24 22:46:20.438921] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.398 [2024-07-24 22:46:20.439137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b1e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.398 [2024-07-24 22:46:20.439160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.398 #32 NEW cov: 12211 ft: 14097 corp: 10/131b lim: 35 exec/s: 0 rss: 72Mb L: 11/22 MS: 1 InsertByte- 00:07:22.398 [2024-07-24 22:46:20.489206] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.398 [2024-07-24 22:46:20.489453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.398 [2024-07-24 22:46:20.489475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.398 [2024-07-24 22:46:20.489527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:63000000 cdw11:00001700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:22.398 [2024-07-24 22:46:20.489540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.399 #33 NEW cov: 12211 ft: 14160 corp: 11/149b lim: 35 exec/s: 0 rss: 72Mb L: 18/22 MS: 1 ShuffleBytes- 00:07:22.399 [2024-07-24 22:46:20.539383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.399 [2024-07-24 22:46:20.539405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.399 #34 NEW cov: 12211 ft: 14208 corp: 12/161b lim: 35 exec/s: 0 rss: 72Mb L: 12/22 MS: 1 EraseBytes- 00:07:22.399 [2024-07-24 22:46:20.579400] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.399 [2024-07-24 22:46:20.579634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.399 [2024-07-24 22:46:20.579658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.399 [2024-07-24 22:46:20.579713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:63000000 cdw11:00001700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.399 [2024-07-24 22:46:20.579726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.657 #35 NEW cov: 12211 ft: 14222 corp: 13/179b lim: 35 exec/s: 0 rss: 72Mb L: 18/22 MS: 1 ChangeBinInt- 00:07:22.657 [2024-07-24 22:46:20.629620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:00002a17 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.629645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.657 #36 NEW cov: 12211 ft: 14236 corp: 14/191b lim: 35 exec/s: 0 rss: 72Mb L: 12/22 MS: 1 EraseBytes- 00:07:22.657 [2024-07-24 22:46:20.679697] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.657 [2024-07-24 22:46:20.679945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.679967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.657 [2024-07-24 22:46:20.680018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:63000000 cdw11:00001700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.680030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.657 #37 NEW cov: 12211 ft: 14272 corp: 15/209b lim: 35 exec/s: 0 rss: 72Mb L: 18/22 MS: 1 CMP- DE: "\000\000\000\030"- 00:07:22.657 [2024-07-24 22:46:20.719836] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.657 [2024-07-24 22:46:20.720172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.720195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.657 [2024-07-24 22:46:20.720249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3e3e0000 cdw11:3e003e3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.720262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.657 [2024-07-24 22:46:20.720324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3e3e003e cdw11:17006300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.720334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.657 #38 NEW cov: 12211 ft: 14297 corp: 16/236b lim: 35 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:07:22.657 [2024-07-24 22:46:20.769975] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.657 [2024-07-24 22:46:20.770119] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.657 [2024-07-24 22:46:20.770341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000001e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.770363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.657 [2024-07-24 22:46:20.770417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3f0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.770430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.657 [2024-07-24 22:46:20.770479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bf0a0000 cdw11:63006363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.770491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.657 #39 NEW cov: 12211 ft: 14314 corp: 17/258b lim: 35 exec/s: 0 rss: 72Mb L: 22/27 MS: 1 ChangeBit- 00:07:22.657 [2024-07-24 22:46:20.809976] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.657 [2024-07-24 22:46:20.810329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b1e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.810356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.657 [2024-07-24 22:46:20.810411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b3a2002a cdw11:17008c63 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.810423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.657 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:22.657 
#40 NEW cov: 12234 ft: 14366 corp: 18/277b lim: 35 exec/s: 0 rss: 72Mb L: 19/27 MS: 1 PersAutoDict- DE: "\035*\263\242\214c\027\000"- 00:07:22.657 [2024-07-24 22:46:20.860417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.860440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.657 [2024-07-24 22:46:20.860491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:1d2a0063 cdw11:8c00b3a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.657 [2024-07-24 22:46:20.860502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.915 #41 NEW cov: 12234 ft: 14380 corp: 19/297b lim: 35 exec/s: 0 rss: 72Mb L: 20/27 MS: 1 PersAutoDict- DE: "\035*\263\242\214c\027\000"- 00:07:22.915 [2024-07-24 22:46:20.910376] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.915 [2024-07-24 22:46:20.910606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a1e009a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:20.910629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.915 [2024-07-24 22:46:20.910681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00003f0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:20.910693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.915 #43 NEW cov: 12234 ft: 14421 corp: 20/316b lim: 35 exec/s: 0 rss: 72Mb L: 19/27 MS: 2 InsertByte-CrossOver- 00:07:22.915 [2024-07-24 22:46:20.950360] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.915 [2024-07-24 22:46:20.950708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b1e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:20.950733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.915 [2024-07-24 22:46:20.950788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b3a2002a cdw11:17008c63 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:20.950799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.915 #44 NEW cov: 12234 ft: 14515 corp: 21/335b lim: 35 exec/s: 44 rss: 72Mb L: 19/27 MS: 1 ShuffleBytes- 00:07:22.915 [2024-07-24 22:46:21.000537] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.915 [2024-07-24 22:46:21.000674] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.915 [2024-07-24 22:46:21.000985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b1e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:21.001009] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.915 [2024-07-24 22:46:21.001065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:21.001086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.915 [2024-07-24 22:46:21.001140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:21.001151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.915 #45 NEW cov: 12234 ft: 14527 corp: 22/360b lim: 35 exec/s: 45 rss: 72Mb L: 25/27 MS: 1 InsertRepeatedBytes- 00:07:22.915 [2024-07-24 22:46:21.040728] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.915 [2024-07-24 22:46:21.041042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:131d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:21.041065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.915 [2024-07-24 22:46:21.041118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3e3e0000 cdw11:3e003e3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:21.041131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.915 [2024-07-24 22:46:21.041182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3e3e003e cdw11:17006300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:21.041193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.915 #46 NEW cov: 12234 ft: 14545 corp: 23/387b lim: 35 exec/s: 46 rss: 72Mb L: 27/27 MS: 1 ChangeByte- 00:07:22.915 [2024-07-24 22:46:21.090763] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.915 [2024-07-24 22:46:21.090881] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.915 [2024-07-24 22:46:21.091091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:17001d2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:21.091131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.915 [2024-07-24 22:46:21.091187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a00003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.915 [2024-07-24 22:46:21.091200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.173 #47 NEW cov: 12234 ft: 14600 corp: 24/406b lim: 35 exec/s: 47 rss: 73Mb L: 19/27 MS: 1 CrossOver- 00:07:23.173 [2024-07-24 22:46:21.141089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:4 nsid:0 cdw10:00ff001e cdw11:8d001663 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.173 [2024-07-24 22:46:21.141111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.173 #48 NEW cov: 12234 ft: 14608 corp: 25/418b lim: 35 exec/s: 48 rss: 73Mb L: 12/27 MS: 1 CMP- DE: "\377\026c\215\012\244\000H"- 00:07:23.173 [2024-07-24 22:46:21.180987] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.173 [2024-07-24 22:46:21.181328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b000000 cdw11:00001300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.173 [2024-07-24 22:46:21.181353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.173 [2024-07-24 22:46:21.181408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b3a2002a cdw11:17008c63 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.173 [2024-07-24 22:46:21.181422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.173 #49 NEW cov: 12234 ft: 14689 corp: 26/437b lim: 35 exec/s: 49 rss: 73Mb L: 19/27 MS: 1 ChangeBinInt- 00:07:23.174 [2024-07-24 22:46:21.221452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:1d001e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.221475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.174 [2024-07-24 22:46:21.221545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:a28c00b3 cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.221556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.174 #50 NEW cov: 12234 ft: 14728 corp: 27/456b lim: 35 exec/s: 50 rss: 73Mb L: 19/27 MS: 1 CopyPart- 00:07:23.174 [2024-07-24 22:46:21.261218] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.174 [2024-07-24 22:46:21.261339] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.174 [2024-07-24 22:46:21.261453] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.174 [2024-07-24 22:46:21.261664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.261689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.174 [2024-07-24 22:46:21.261743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.261756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.174 [2024-07-24 22:46:21.261806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.261819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.174 #51 NEW cov: 12234 ft: 14756 corp: 28/482b lim: 35 exec/s: 51 rss: 73Mb L: 26/27 MS: 1 InsertRepeatedBytes- 00:07:23.174 [2024-07-24 22:46:21.301572] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.174 [2024-07-24 22:46:21.301809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a1e009a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.301830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.174 [2024-07-24 22:46:21.301884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:4a1e00b1 cdw11:17008d63 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.301895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.174 [2024-07-24 22:46:21.301949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a00003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.301962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.174 #52 NEW cov: 12234 ft: 14797 corp: 29/509b lim: 35 exec/s: 52 rss: 73Mb L: 27/27 MS: 1 CMP- DE: "t\261J\036\215c\027\000"- 00:07:23.174 [2024-07-24 22:46:21.351577] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.174 [2024-07-24 22:46:21.352118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b1e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.352142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.174 [2024-07-24 22:46:21.352197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:76760076 cdw11:76007676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.352208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.174 [2024-07-24 22:46:21.352259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:76760076 cdw11:2a00761d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.352270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.174 [2024-07-24 22:46:21.352324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:8c6300a2 cdw11:00001700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.174 [2024-07-24 22:46:21.352334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.432 #53 NEW cov: 12234 ft: 15271 corp: 30/540b lim: 35 exec/s: 53 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:23.432 [2024-07-24 22:46:21.411989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 
nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.412011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.432 [2024-07-24 22:46:21.412086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:1d2a0063 cdw11:8c00b3a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.412098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.432 #54 NEW cov: 12234 ft: 15281 corp: 31/560b lim: 35 exec/s: 54 rss: 73Mb L: 20/31 MS: 1 CopyPart- 00:07:23.432 [2024-07-24 22:46:21.461814] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.432 [2024-07-24 22:46:21.462049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.462078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.432 #55 NEW cov: 12234 ft: 15289 corp: 32/570b lim: 35 exec/s: 55 rss: 73Mb L: 10/31 MS: 1 ShuffleBytes- 00:07:23.432 [2024-07-24 22:46:21.502019] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.432 [2024-07-24 22:46:21.502268] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.432 [2024-07-24 22:46:21.502710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.502734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.432 [2024-07-24 22:46:21.502788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:003b003f cdw11:00001e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.502800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.432 [2024-07-24 22:46:21.502854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:6300ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.502867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.432 [2024-07-24 22:46:21.502920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.502931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.432 [2024-07-24 22:46:21.502983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:6363005c cdw11:630063ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.502996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.432 #56 NEW cov: 12234 ft: 15337 corp: 33/605b lim: 35 exec/s: 56 rss: 73Mb L: 35/35 MS: 1 
CrossOver- 00:07:23.432 [2024-07-24 22:46:21.542186] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.432 [2024-07-24 22:46:21.542505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:131d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.542526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.432 [2024-07-24 22:46:21.542579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3e3e0000 cdw11:3e003e3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.542592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.432 [2024-07-24 22:46:21.542643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3e3e003e cdw11:17006300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.542654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.432 #57 NEW cov: 12234 ft: 15349 corp: 34/632b lim: 35 exec/s: 57 rss: 73Mb L: 27/35 MS: 1 ChangeBinInt- 00:07:23.432 [2024-07-24 22:46:21.592314] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.432 [2024-07-24 22:46:21.592550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000001e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.592573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.432 [2024-07-24 22:46:21.592624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3f0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.432 [2024-07-24 22:46:21.592637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.432 #58 NEW cov: 12234 ft: 15361 corp: 35/647b lim: 35 exec/s: 58 rss: 73Mb L: 15/35 MS: 1 EraseBytes- 00:07:23.691 [2024-07-24 22:46:21.642550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0002001e cdw11:00002a17 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.642573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.691 #59 NEW cov: 12234 ft: 15403 corp: 36/659b lim: 35 exec/s: 59 rss: 74Mb L: 12/35 MS: 1 ChangeByte- 00:07:23.691 [2024-07-24 22:46:21.692527] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.691 [2024-07-24 22:46:21.692649] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.691 [2024-07-24 22:46:21.692761] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.691 [2024-07-24 22:46:21.692972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.692996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.693056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.693069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.693125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.693137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.691 #60 NEW cov: 12234 ft: 15415 corp: 37/686b lim: 35 exec/s: 60 rss: 74Mb L: 27/35 MS: 1 CopyPart- 00:07:23.691 [2024-07-24 22:46:21.742756] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.691 [2024-07-24 22:46:21.742892] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.691 [2024-07-24 22:46:21.743115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:001d001e cdw11:a2002ab3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.743137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.743192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:63000000 cdw11:00001700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.743205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.743259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00740000 cdw11:1e00b14a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.743272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.691 #61 NEW cov: 12234 ft: 15421 corp: 38/712b lim: 35 exec/s: 61 rss: 74Mb L: 26/35 MS: 1 PersAutoDict- DE: "t\261J\036\215c\027\000"- 00:07:23.691 [2024-07-24 22:46:21.782734] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.691 [2024-07-24 22:46:21.782853] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.691 [2024-07-24 22:46:21.783085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:17001d2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.783111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.783164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a00003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.783177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.691 #62 NEW cov: 12234 ft: 15425 corp: 39/731b lim: 35 exec/s: 62 rss: 74Mb L: 19/35 
MS: 1 ChangeBit- 00:07:23.691 [2024-07-24 22:46:21.832894] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.691 [2024-07-24 22:46:21.833239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:17001d2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.833264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.833317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:0a00003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.833328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.691 #63 NEW cov: 12234 ft: 15435 corp: 40/750b lim: 35 exec/s: 63 rss: 74Mb L: 19/35 MS: 1 CMP- DE: "\015\000\000\000"- 00:07:23.691 [2024-07-24 22:46:21.873549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0031 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.873571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.873624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.873635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.873689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.873700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.691 [2024-07-24 22:46:21.873750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.691 [2024-07-24 22:46:21.873760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.950 #65 NEW cov: 12234 ft: 15457 corp: 41/784b lim: 35 exec/s: 65 rss: 74Mb L: 34/35 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:23.950 [2024-07-24 22:46:21.913173] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.950 [2024-07-24 22:46:21.913414] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.950 [2024-07-24 22:46:21.913534] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.950 [2024-07-24 22:46:21.913753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1e6d0000 cdw11:6d006d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.950 [2024-07-24 22:46:21.913777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.950 [2024-07-24 22:46:21.913829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0000006d cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.950 [2024-07-24 22:46:21.913840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.951 [2024-07-24 22:46:21.913893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.951 [2024-07-24 22:46:21.913906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.951 [2024-07-24 22:46:21.913957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.951 [2024-07-24 22:46:21.913971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.951 #66 NEW cov: 12234 ft: 15468 corp: 42/817b lim: 35 exec/s: 33 rss: 74Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:23.951 #66 DONE cov: 12234 ft: 15468 corp: 42/817b lim: 35 exec/s: 33 rss: 74Mb 00:07:23.951 ###### Recommended dictionary. ###### 00:07:23.951 "\000\000\000\000\000\000\000?" # Uses: 1 00:07:23.951 "\035*\263\242\214c\027\000" # Uses: 2 00:07:23.951 "\000\000\000\030" # Uses: 0 00:07:23.951 "\377\026c\215\012\244\000H" # Uses: 0 00:07:23.951 "t\261J\036\215c\027\000" # Uses: 1 00:07:23.951 "\015\000\000\000" # Uses: 0 00:07:23.951 ###### End of recommended dictionary. ###### 00:07:23.951 Done 66 runs in 2 second(s) 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:23.951 22:46:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:23.951 [2024-07-24 22:46:22.112824] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:23.951 [2024-07-24 22:46:22.112889] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479787 ] 00:07:23.951 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.209 [2024-07-24 22:46:22.360223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.467 [2024-07-24 22:46:22.443911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.467 [2024-07-24 22:46:22.502437] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.467 [2024-07-24 22:46:22.518678] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:24.467 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.467 INFO: Seed: 135472348 00:07:24.467 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:24.467 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:24.467 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:24.467 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.467 #2 INITED exec/s: 0 rss: 64Mb 00:07:24.467 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:24.467 This may also happen if the target rejected all inputs we tried so far 00:07:24.725 NEW_FUNC[1/689]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:24.725 NEW_FUNC[2/689]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.725 #16 NEW cov: 11895 ft: 11894 corp: 2/5b lim: 20 exec/s: 0 rss: 71Mb L: 4/4 MS: 4 InsertByte-InsertByte-ChangeBit-CrossOver- 00:07:24.725 #17 NEW cov: 12008 ft: 12433 corp: 3/9b lim: 20 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:24.725 #18 NEW cov: 12014 ft: 12643 corp: 4/13b lim: 20 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 ChangeBit- 00:07:24.983 #23 NEW cov: 12099 ft: 12903 corp: 5/19b lim: 20 exec/s: 0 rss: 72Mb L: 6/6 MS: 5 ShuffleBytes-ChangeBit-InsertByte-CrossOver-CrossOver- 00:07:24.983 #24 NEW cov: 12099 ft: 13030 corp: 6/25b lim: 20 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:24.983 #25 NEW cov: 12099 ft: 13153 corp: 7/29b lim: 20 exec/s: 0 rss: 72Mb L: 4/6 MS: 1 ShuffleBytes- 00:07:24.983 #26 NEW cov: 12099 ft: 13214 corp: 8/33b lim: 20 exec/s: 0 rss: 72Mb L: 4/6 MS: 1 ChangeBit- 00:07:25.241 #28 NEW cov: 12099 ft: 13262 corp: 9/38b lim: 20 exec/s: 0 rss: 72Mb L: 5/6 MS: 2 ShuffleBytes-CrossOver- 00:07:25.241 #34 NEW cov: 12124 ft: 13707 corp: 10/58b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:25.241 #35 NEW cov: 12124 ft: 13874 corp: 11/62b lim: 20 exec/s: 0 rss: 72Mb L: 4/20 MS: 1 ChangeByte- 00:07:25.241 #36 NEW cov: 12124 ft: 13948 corp: 12/66b lim: 20 exec/s: 0 rss: 72Mb L: 4/20 MS: 1 ChangeBit- 00:07:25.499 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:25.499 #37 NEW cov: 12141 ft: 14033 corp: 13/86b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeByte- 00:07:25.499 #38 NEW cov: 12149 ft: 14190 corp: 14/100b lim: 20 exec/s: 38 rss: 72Mb L: 14/20 MS: 1 EraseBytes- 00:07:25.499 #39 NEW cov: 12150 ft: 14255 corp: 15/116b lim: 20 exec/s: 39 rss: 72Mb L: 16/20 MS: 1 CrossOver- 00:07:25.757 #40 NEW cov: 12150 ft: 14315 corp: 16/130b lim: 20 exec/s: 40 rss: 72Mb L: 14/20 MS: 1 ChangeBit- 00:07:25.757 #41 NEW cov: 12150 ft: 14381 corp: 17/144b lim: 20 exec/s: 41 rss: 72Mb L: 14/20 MS: 1 ChangeBinInt- 00:07:25.757 #42 NEW cov: 12150 ft: 14411 corp: 18/164b lim: 20 exec/s: 42 rss: 72Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:25.757 #43 NEW cov: 12150 ft: 14441 corp: 19/179b lim: 20 exec/s: 43 rss: 72Mb L: 15/20 MS: 1 InsertRepeatedBytes- 00:07:26.015 #44 NEW cov: 12150 ft: 14488 corp: 20/191b lim: 20 exec/s: 44 rss: 72Mb L: 12/20 MS: 1 CMP- DE: "\000\027c\222\240Wx$"- 00:07:26.015 #46 NEW cov: 12150 ft: 14528 corp: 21/196b lim: 20 exec/s: 46 rss: 72Mb L: 5/20 MS: 2 EraseBytes-CrossOver- 00:07:26.015 #47 NEW cov: 12150 ft: 14588 corp: 22/200b lim: 20 exec/s: 47 rss: 72Mb L: 4/20 MS: 1 CopyPart- 00:07:26.015 #48 NEW cov: 12150 ft: 14621 corp: 23/205b lim: 20 exec/s: 48 rss: 72Mb L: 5/20 MS: 1 ChangeByte- 00:07:26.273 #49 NEW cov: 12150 ft: 14626 corp: 24/211b lim: 20 exec/s: 49 rss: 73Mb L: 6/20 MS: 1 CrossOver- 00:07:26.273 #50 NEW cov: 12150 ft: 14636 corp: 25/225b lim: 20 exec/s: 50 rss: 73Mb L: 14/20 MS: 1 PersAutoDict- DE: "\000\027c\222\240Wx$"- 00:07:26.273 #51 NEW cov: 12150 ft: 14653 corp: 26/242b lim: 20 exec/s: 51 rss: 73Mb L: 17/20 MS: 1 CopyPart- 00:07:26.532 #52 NEW cov: 12158 ft: 14857 corp: 27/252b lim: 20 exec/s: 52 rss: 73Mb L: 10/20 MS: 1 
EraseBytes- 00:07:26.532 #53 NEW cov: 12158 ft: 14926 corp: 28/268b lim: 20 exec/s: 53 rss: 73Mb L: 16/20 MS: 1 ChangeBit- 00:07:26.532 #54 NEW cov: 12158 ft: 14952 corp: 29/277b lim: 20 exec/s: 27 rss: 73Mb L: 9/20 MS: 1 EraseBytes- 00:07:26.532 #54 DONE cov: 12158 ft: 14952 corp: 29/277b lim: 20 exec/s: 27 rss: 73Mb 00:07:26.532 ###### Recommended dictionary. ###### 00:07:26.532 "\000\027c\222\240Wx$" # Uses: 1 00:07:26.532 ###### End of recommended dictionary. ###### 00:07:26.532 Done 54 runs in 2 second(s) 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.532 22:46:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:26.790 [2024-07-24 22:46:24.758582] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:26.790 [2024-07-24 22:46:24.758653] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480222 ] 00:07:26.790 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.048 [2024-07-24 22:46:25.007638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.048 [2024-07-24 22:46:25.086984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.048 [2024-07-24 22:46:25.145427] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.048 [2024-07-24 22:46:25.161671] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:27.048 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.048 INFO: Seed: 2778459092 00:07:27.048 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:27.048 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:27.048 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:27.048 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.048 #2 INITED exec/s: 0 rss: 64Mb 00:07:27.048 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.048 This may also happen if the target rejected all inputs we tried so far 00:07:27.048 [2024-07-24 22:46:25.217261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.048 [2024-07-24 22:46:25.217292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.048 [2024-07-24 22:46:25.217360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.048 [2024-07-24 22:46:25.217375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.306 NEW_FUNC[1/701]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:27.306 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.306 #12 NEW cov: 12019 ft: 12018 corp: 2/19b lim: 35 exec/s: 0 rss: 70Mb L: 18/18 MS: 5 ChangeBit-CopyPart-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:27.306 [2024-07-24 22:46:25.368642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.368722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.306 [2024-07-24 22:46:25.368827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.368869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.306 [2024-07-24 22:46:25.368973] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.369002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.306 #23 NEW cov: 12132 ft: 12934 corp: 3/41b lim: 35 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:07:27.306 [2024-07-24 22:46:25.418054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.418082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.306 [2024-07-24 22:46:25.418141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.418153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.306 [2024-07-24 22:46:25.418206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.418217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.306 [2024-07-24 22:46:25.418271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.418281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.306 #24 NEW cov: 12138 ft: 13412 corp: 4/74b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 CopyPart- 00:07:27.306 [2024-07-24 22:46:25.467819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.467840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.306 [2024-07-24 22:46:25.467897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.306 [2024-07-24 22:46:25.467907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.306 #25 NEW cov: 12223 ft: 13730 corp: 5/92b lim: 35 exec/s: 0 rss: 70Mb L: 18/33 MS: 1 InsertRepeatedBytes- 00:07:27.307 [2024-07-24 22:46:25.508326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.307 [2024-07-24 22:46:25.508353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.307 [2024-07-24 22:46:25.508413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.307 [2024-07-24 22:46:25.508426] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.307 [2024-07-24 22:46:25.508480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.307 [2024-07-24 22:46:25.508494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.307 [2024-07-24 22:46:25.508548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.307 [2024-07-24 22:46:25.508559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.565 #26 NEW cov: 12223 ft: 13778 corp: 6/125b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ChangeBit- 00:07:27.565 [2024-07-24 22:46:25.558474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.558498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.558569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.558581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.558635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.558646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.558699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.558710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.565 #27 NEW cov: 12223 ft: 13821 corp: 7/158b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ChangeBinInt- 00:07:27.565 [2024-07-24 22:46:25.598561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.598585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.598640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.598652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.598706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.598716] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.598769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.598780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.565 #28 NEW cov: 12223 ft: 13907 corp: 8/191b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ChangeByte- 00:07:27.565 [2024-07-24 22:46:25.648202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.648225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.565 #29 NEW cov: 12223 ft: 14685 corp: 9/200b lim: 35 exec/s: 0 rss: 71Mb L: 9/33 MS: 1 EraseBytes- 00:07:27.565 [2024-07-24 22:46:25.698835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00008000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.698860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.698914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.698925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.698977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.698987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.699042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.699052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.565 #30 NEW cov: 12223 ft: 14768 corp: 10/233b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 ChangeBit- 00:07:27.565 [2024-07-24 22:46:25.738958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.738981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.739036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.739047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.739104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.739115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.565 [2024-07-24 22:46:25.739187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.565 [2024-07-24 22:46:25.739198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.823 #31 NEW cov: 12223 ft: 14800 corp: 11/266b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 ChangeBinInt- 00:07:27.823 [2024-07-24 22:46:25.789117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.823 [2024-07-24 22:46:25.789140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.823 [2024-07-24 22:46:25.789196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.823 [2024-07-24 22:46:25.789206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.823 [2024-07-24 22:46:25.789259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.823 [2024-07-24 22:46:25.789269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.823 [2024-07-24 22:46:25.789322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.789333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.824 #32 NEW cov: 12223 ft: 14824 corp: 12/297b lim: 35 exec/s: 0 rss: 71Mb L: 31/33 MS: 1 EraseBytes- 00:07:27.824 [2024-07-24 22:46:25.838906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.838929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.824 [2024-07-24 22:46:25.838983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08081208 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.838994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.824 #33 NEW cov: 12223 ft: 14850 corp: 13/315b lim: 35 exec/s: 0 rss: 71Mb L: 18/33 MS: 1 ChangeBinInt- 00:07:27.824 [2024-07-24 22:46:25.889036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.889058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.824 [2024-07-24 22:46:25.889135] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.889147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.824 #34 NEW cov: 12223 ft: 14914 corp: 14/334b lim: 35 exec/s: 0 rss: 71Mb L: 19/33 MS: 1 CrossOver- 00:07:27.824 [2024-07-24 22:46:25.929522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00810000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.929546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.824 [2024-07-24 22:46:25.929616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.929627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.824 [2024-07-24 22:46:25.929683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.929694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.824 [2024-07-24 22:46:25.929748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.929759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.824 #35 NEW cov: 12223 ft: 14936 corp: 15/368b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 InsertByte- 00:07:27.824 [2024-07-24 22:46:25.969443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.969465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.824 [2024-07-24 22:46:25.969518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.969530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.824 [2024-07-24 22:46:25.969583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:25.969596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.824 #36 NEW cov: 12223 ft: 14971 corp: 16/393b lim: 35 exec/s: 0 rss: 71Mb L: 25/34 MS: 1 EraseBytes- 00:07:27.824 [2024-07-24 22:46:26.009376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:26.009398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.824 [2024-07-24 22:46:26.009454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:7aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.824 [2024-07-24 22:46:26.009465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 #37 NEW cov: 12223 ft: 15031 corp: 17/413b lim: 35 exec/s: 0 rss: 71Mb L: 20/34 MS: 1 InsertByte- 00:07:28.082 [2024-07-24 22:46:26.059340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.059362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:28.082 #38 NEW cov: 12246 ft: 15151 corp: 18/422b lim: 35 exec/s: 0 rss: 72Mb L: 9/34 MS: 1 ChangeBinInt- 00:07:28.082 [2024-07-24 22:46:26.120055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00810000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.120091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 [2024-07-24 22:46:26.120163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.120175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 [2024-07-24 22:46:26.120229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f8000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.120250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.082 [2024-07-24 22:46:26.120301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.120312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.082 #39 NEW cov: 12246 ft: 15177 corp: 19/456b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBinInt- 00:07:28.082 [2024-07-24 22:46:26.169837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.169859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 [2024-07-24 22:46:26.169916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08081208 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.169926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 #40 NEW cov: 12246 ft: 15199 corp: 20/474b lim: 35 exec/s: 40 rss: 
72Mb L: 18/34 MS: 1 ShuffleBytes- 00:07:28.082 [2024-07-24 22:46:26.220361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:81000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.220386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 [2024-07-24 22:46:26.220459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.220471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.082 [2024-07-24 22:46:26.220526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.220537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.082 [2024-07-24 22:46:26.220590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.220600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.082 #41 NEW cov: 12246 ft: 15268 corp: 21/508b lim: 35 exec/s: 41 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:28.082 [2024-07-24 22:46:26.259875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.082 [2024-07-24 22:46:26.259896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.082 #42 NEW cov: 12246 ft: 15327 corp: 22/517b lim: 35 exec/s: 42 rss: 72Mb L: 9/34 MS: 1 CrossOver- 00:07:28.340 [2024-07-24 22:46:26.300193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:40080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.340 [2024-07-24 22:46:26.300217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.340 [2024-07-24 22:46:26.300286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08081208 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.340 [2024-07-24 22:46:26.300297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.340 #43 NEW cov: 12246 ft: 15392 corp: 23/535b lim: 35 exec/s: 43 rss: 72Mb L: 18/34 MS: 1 ChangeByte- 00:07:28.340 [2024-07-24 22:46:26.350342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.340 [2024-07-24 22:46:26.350364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.340 [2024-07-24 22:46:26.350434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08081208 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.340 
[2024-07-24 22:46:26.350446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.340 #44 NEW cov: 12246 ft: 15422 corp: 24/553b lim: 35 exec/s: 44 rss: 72Mb L: 18/34 MS: 1 ChangeByte- 00:07:28.340 [2024-07-24 22:46:26.390787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-24 22:46:26.390809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 [2024-07-24 22:46:26.390878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-24 22:46:26.390889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.341 [2024-07-24 22:46:26.390943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00008000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-24 22:46:26.390956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.341 [2024-07-24 22:46:26.391011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-24 22:46:26.391022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.341 #45 NEW cov: 12246 ft: 15435 corp: 25/586b lim: 35 exec/s: 45 rss: 72Mb L: 33/34 MS: 1 ChangeBit- 00:07:28.341 [2024-07-24 22:46:26.430562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f7f708f2 cdw11:bf080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-24 22:46:26.430584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 [2024-07-24 22:46:26.430639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08081208 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-24 22:46:26.430651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.341 #46 NEW cov: 12246 ft: 15476 corp: 26/604b lim: 35 exec/s: 46 rss: 72Mb L: 18/34 MS: 1 ChangeBinInt- 00:07:28.341 [2024-07-24 22:46:26.480523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08081208 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-24 22:46:26.480544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.341 #47 NEW cov: 12246 ft: 15564 corp: 27/615b lim: 35 exec/s: 47 rss: 72Mb L: 11/34 MS: 1 EraseBytes- 00:07:28.341 [2024-07-24 22:46:26.530704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.341 [2024-07-24 22:46:26.530727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:28.599 #48 NEW cov: 12246 ft: 15571 corp: 28/624b lim: 35 exec/s: 48 rss: 72Mb L: 9/34 MS: 1 ChangeBit- 00:07:28.599 [2024-07-24 22:46:26.580814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.580836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.599 #49 NEW cov: 12246 ft: 15580 corp: 29/633b lim: 35 exec/s: 49 rss: 72Mb L: 9/34 MS: 1 ChangeByte- 00:07:28.599 [2024-07-24 22:46:26.621629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.621651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.621721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.621732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.621786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.621797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.621850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.621864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.621918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.621929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.599 #50 NEW cov: 12246 ft: 15647 corp: 30/668b lim: 35 exec/s: 50 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:07:28.599 [2024-07-24 22:46:26.661527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.661549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.661603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.661614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.661666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:94000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.661677] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.661729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.661739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.599 #51 NEW cov: 12246 ft: 15651 corp: 31/702b lim: 35 exec/s: 51 rss: 72Mb L: 34/35 MS: 1 InsertByte- 00:07:28.599 [2024-07-24 22:46:26.701711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00008000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.701733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.701788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.701799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.701851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.701862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.701916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00001f00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.701927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.599 #52 NEW cov: 12246 ft: 15662 corp: 32/735b lim: 35 exec/s: 52 rss: 72Mb L: 33/35 MS: 1 ChangeByte- 00:07:28.599 [2024-07-24 22:46:26.751828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.751849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.751905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.751915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.751986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:930000fa cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.751997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.599 [2024-07-24 22:46:26.752050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.599 [2024-07-24 22:46:26.752060] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.600 #53 NEW cov: 12246 ft: 15721 corp: 33/769b lim: 35 exec/s: 53 rss: 72Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:28.600 [2024-07-24 22:46:26.801492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffbfff cdw11:ff090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.600 [2024-07-24 22:46:26.801514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.857 #54 NEW cov: 12246 ft: 15733 corp: 34/776b lim: 35 exec/s: 54 rss: 72Mb L: 7/35 MS: 1 EraseBytes- 00:07:28.857 [2024-07-24 22:46:26.851943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:26.851964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.857 [2024-07-24 22:46:26.852019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:26.852029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.857 [2024-07-24 22:46:26.852085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:26.852096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.857 #55 NEW cov: 12246 ft: 15749 corp: 35/798b lim: 35 exec/s: 55 rss: 72Mb L: 22/35 MS: 1 CopyPart- 00:07:28.857 [2024-07-24 22:46:26.891708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:32ffbfff cdw11:ff090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:26.891732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.857 #56 NEW cov: 12246 ft: 15769 corp: 36/805b lim: 35 exec/s: 56 rss: 72Mb L: 7/35 MS: 1 ChangeByte- 00:07:28.857 [2024-07-24 22:46:26.941848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:26.941871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.857 #57 NEW cov: 12246 ft: 15777 corp: 37/815b lim: 35 exec/s: 57 rss: 72Mb L: 10/35 MS: 1 CrossOver- 00:07:28.857 [2024-07-24 22:46:26.982326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:26.982348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.857 [2024-07-24 22:46:26.982401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:26.982413] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.857 [2024-07-24 22:46:26.982465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:26.982478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.857 #58 NEW cov: 12246 ft: 15794 corp: 38/840b lim: 35 exec/s: 58 rss: 73Mb L: 25/35 MS: 1 ShuffleBytes- 00:07:28.857 [2024-07-24 22:46:27.032096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.857 [2024-07-24 22:46:27.032120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.117 #59 NEW cov: 12246 ft: 15810 corp: 39/849b lim: 35 exec/s: 59 rss: 73Mb L: 9/35 MS: 1 CopyPart- 00:07:29.117 [2024-07-24 22:46:27.082767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.082791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.082861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.082873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.082926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.082936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.082988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.082999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.117 #60 NEW cov: 12246 ft: 15835 corp: 40/882b lim: 35 exec/s: 60 rss: 73Mb L: 33/35 MS: 1 ChangeBit- 00:07:29.117 [2024-07-24 22:46:27.132583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0808ff00 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.132606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.132660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08081208 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.132671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.117 #61 NEW cov: 12246 ft: 15838 corp: 41/900b lim: 35 exec/s: 61 rss: 73Mb L: 18/35 MS: 1 CMP- DE: "\377\000"- 00:07:29.117 [2024-07-24 
22:46:27.173083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.173106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.173175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.173187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.173240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.173250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.173307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.173318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.117 #62 NEW cov: 12246 ft: 15848 corp: 42/933b lim: 35 exec/s: 62 rss: 73Mb L: 33/35 MS: 1 ChangeByte- 00:07:29.117 [2024-07-24 22:46:27.213185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:81000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.213207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.213278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.213289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.213352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.213362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.117 [2024-07-24 22:46:27.213415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.117 [2024-07-24 22:46:27.213425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.117 #63 NEW cov: 12246 ft: 15866 corp: 43/967b lim: 35 exec/s: 31 rss: 73Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:29.117 #63 DONE cov: 12246 ft: 15866 corp: 43/967b lim: 35 exec/s: 31 rss: 73Mb 00:07:29.117 ###### Recommended dictionary. ###### 00:07:29.117 "\377\000" # Uses: 0 00:07:29.117 ###### End of recommended dictionary. 
###### 00:07:29.117 Done 63 runs in 2 second(s) 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.376 22:46:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:29.376 [2024-07-24 22:46:27.411993] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:29.376 [2024-07-24 22:46:27.412061] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480660 ] 00:07:29.377 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.635 [2024-07-24 22:46:27.663001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.635 [2024-07-24 22:46:27.741246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.635 [2024-07-24 22:46:27.799724] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.635 [2024-07-24 22:46:27.815963] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:29.635 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.635 INFO: Seed: 1136486162 00:07:29.895 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:29.895 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:29.895 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.895 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.895 #2 INITED exec/s: 0 rss: 65Mb 00:07:29.895 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:29.895 This may also happen if the target rejected all inputs we tried so far 00:07:29.895 [2024-07-24 22:46:27.871453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a56 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.895 [2024-07-24 22:46:27.871485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.895 NEW_FUNC[1/701]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:29.895 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:29.895 #9 NEW cov: 12027 ft: 12017 corp: 2/12b lim: 45 exec/s: 0 rss: 71Mb L: 11/11 MS: 2 CMP-CMP- DE: "\001\""-"V\000\000\000\000\000\000\000"- 00:07:29.895 [2024-07-24 22:46:28.021932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.895 [2024-07-24 22:46:28.021981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.895 #10 NEW cov: 12143 ft: 12548 corp: 3/23b lim: 45 exec/s: 0 rss: 71Mb L: 11/11 MS: 1 ChangeBinInt- 00:07:29.895 [2024-07-24 22:46:28.081774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00560256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.895 [2024-07-24 22:46:28.081798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.154 #11 NEW cov: 12149 ft: 12823 corp: 4/34b lim: 45 exec/s: 0 rss: 71Mb L: 11/11 MS: 1 PersAutoDict- DE: "V\000\000\000\000\000\000\000"- 00:07:30.154 [2024-07-24 22:46:28.131945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00010a56 
cdw11:22000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.154 [2024-07-24 22:46:28.131968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.154 #12 NEW cov: 12234 ft: 13155 corp: 5/45b lim: 45 exec/s: 0 rss: 71Mb L: 11/11 MS: 1 PersAutoDict- DE: "\001\""- 00:07:30.154 [2024-07-24 22:46:28.172026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00560256 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.154 [2024-07-24 22:46:28.172048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.154 #13 NEW cov: 12234 ft: 13299 corp: 6/56b lim: 45 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 ChangeBit- 00:07:30.154 [2024-07-24 22:46:28.222219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:02560256 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.154 [2024-07-24 22:46:28.222241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.154 #14 NEW cov: 12234 ft: 13374 corp: 7/67b lim: 45 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 ChangeBit- 00:07:30.154 [2024-07-24 22:46:28.272316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.154 [2024-07-24 22:46:28.272340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.154 #20 NEW cov: 12234 ft: 13477 corp: 8/78b lim: 45 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 PersAutoDict- DE: "V\000\000\000\000\000\000\000"- 00:07:30.154 [2024-07-24 22:46:28.312466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:56000aff cdw11:01220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.154 [2024-07-24 22:46:28.312489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.154 #21 NEW cov: 12234 ft: 13532 corp: 9/90b lim: 45 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 InsertByte- 00:07:30.412 [2024-07-24 22:46:28.362637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00560256 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.412 [2024-07-24 22:46:28.362660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.412 #22 NEW cov: 12234 ft: 13585 corp: 10/101b lim: 45 exec/s: 0 rss: 72Mb L: 11/12 MS: 1 CrossOver- 00:07:30.412 [2024-07-24 22:46:28.412729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000feaa cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.412 [2024-07-24 22:46:28.412751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.412 #28 NEW cov: 12234 ft: 13622 corp: 11/112b lim: 45 exec/s: 0 rss: 72Mb L: 11/12 MS: 1 ChangeBinInt- 00:07:30.412 [2024-07-24 22:46:28.462838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5600d902 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.412 [2024-07-24 22:46:28.462860] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.413 #29 NEW cov: 12234 ft: 13638 corp: 12/124b lim: 45 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 InsertByte- 00:07:30.413 [2024-07-24 22:46:28.502977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.413 [2024-07-24 22:46:28.502999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.413 #30 NEW cov: 12234 ft: 13675 corp: 13/135b lim: 45 exec/s: 0 rss: 72Mb L: 11/12 MS: 1 CrossOver- 00:07:30.413 [2024-07-24 22:46:28.543110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.413 [2024-07-24 22:46:28.543132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.413 #31 NEW cov: 12234 ft: 13687 corp: 14/148b lim: 45 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CopyPart- 00:07:30.413 [2024-07-24 22:46:28.583205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0355 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.413 [2024-07-24 22:46:28.583230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.671 #32 NEW cov: 12234 ft: 13700 corp: 15/159b lim: 45 exec/s: 0 rss: 72Mb L: 11/13 MS: 1 ChangeBinInt- 00:07:30.671 [2024-07-24 22:46:28.633548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00560256 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.671 [2024-07-24 22:46:28.633570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.671 [2024-07-24 22:46:28.633638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.671 [2024-07-24 22:46:28.633649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.671 #33 NEW cov: 12234 ft: 14453 corp: 16/178b lim: 45 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:30.671 [2024-07-24 22:46:28.673621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.671 [2024-07-24 22:46:28.673642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.671 [2024-07-24 22:46:28.673709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5600ff02 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.671 [2024-07-24 22:46:28.673721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.671 #34 NEW cov: 12234 ft: 14488 corp: 17/199b lim: 45 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:30.671 [2024-07-24 22:46:28.723657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0355 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:30.671 [2024-07-24 22:46:28.723679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.671 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:30.671 #35 NEW cov: 12257 ft: 14528 corp: 18/210b lim: 45 exec/s: 0 rss: 72Mb L: 11/21 MS: 1 ChangeBinInt- 00:07:30.671 [2024-07-24 22:46:28.773736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:56005600 cdw11:10000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.671 [2024-07-24 22:46:28.773757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.671 #36 NEW cov: 12257 ft: 14538 corp: 19/220b lim: 45 exec/s: 0 rss: 72Mb L: 10/21 MS: 1 EraseBytes- 00:07:30.671 [2024-07-24 22:46:28.813855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0256022f cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.671 [2024-07-24 22:46:28.813878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.671 #37 NEW cov: 12257 ft: 14611 corp: 20/231b lim: 45 exec/s: 0 rss: 72Mb L: 11/21 MS: 1 ChangeByte- 00:07:30.671 [2024-07-24 22:46:28.854024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:01560a00 cdw11:22000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.671 [2024-07-24 22:46:28.854046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.671 #38 NEW cov: 12257 ft: 14625 corp: 21/242b lim: 45 exec/s: 38 rss: 72Mb L: 11/21 MS: 1 ShuffleBytes- 00:07:30.930 [2024-07-24 22:46:28.894117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.930 [2024-07-24 22:46:28.894139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.930 #39 NEW cov: 12257 ft: 14638 corp: 22/251b lim: 45 exec/s: 39 rss: 72Mb L: 9/21 MS: 1 EraseBytes- 00:07:30.930 [2024-07-24 22:46:28.934226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:560002d2 cdw11:56000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.930 [2024-07-24 22:46:28.934248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.930 #40 NEW cov: 12257 ft: 14657 corp: 23/263b lim: 45 exec/s: 40 rss: 72Mb L: 12/21 MS: 1 InsertByte- 00:07:30.930 [2024-07-24 22:46:28.974310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.930 [2024-07-24 22:46:28.974332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.930 #41 NEW cov: 12257 ft: 14714 corp: 24/274b lim: 45 exec/s: 41 rss: 72Mb L: 11/21 MS: 1 ChangeByte- 00:07:30.930 [2024-07-24 22:46:29.014415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.930 [2024-07-24 22:46:29.014437] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.930 #42 NEW cov: 12257 ft: 14722 corp: 25/285b lim: 45 exec/s: 42 rss: 72Mb L: 11/21 MS: 1 ShuffleBytes- 00:07:30.930 [2024-07-24 22:46:29.054522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:56220aff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.930 [2024-07-24 22:46:29.054542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.930 #43 NEW cov: 12257 ft: 14793 corp: 26/297b lim: 45 exec/s: 43 rss: 72Mb L: 12/21 MS: 1 ShuffleBytes- 00:07:30.930 [2024-07-24 22:46:29.104648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0355 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.930 [2024-07-24 22:46:29.104669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.930 #44 NEW cov: 12257 ft: 14838 corp: 27/308b lim: 45 exec/s: 44 rss: 72Mb L: 11/21 MS: 1 CrossOver- 00:07:31.188 [2024-07-24 22:46:29.145138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00560256 cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.145159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.188 [2024-07-24 22:46:29.145209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.145220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.188 [2024-07-24 22:46:29.145269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.145280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.188 #45 NEW cov: 12257 ft: 15094 corp: 28/341b lim: 45 exec/s: 45 rss: 72Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:31.188 [2024-07-24 22:46:29.184909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000b0256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.184930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.188 #46 NEW cov: 12257 ft: 15128 corp: 29/352b lim: 45 exec/s: 46 rss: 72Mb L: 11/33 MS: 1 ChangeBinInt- 00:07:31.188 [2024-07-24 22:46:29.225046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000056 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.225071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.188 #47 NEW cov: 12257 ft: 15137 corp: 30/365b lim: 45 exec/s: 47 rss: 73Mb L: 13/33 MS: 1 ShuffleBytes- 00:07:31.188 [2024-07-24 22:46:29.275178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00010a0a 
cdw11:56220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.275200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.188 #50 NEW cov: 12257 ft: 15144 corp: 31/377b lim: 45 exec/s: 50 rss: 73Mb L: 12/33 MS: 3 ShuffleBytes-ShuffleBytes-CrossOver- 00:07:31.188 [2024-07-24 22:46:29.315268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00560256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.315289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.188 #51 NEW cov: 12257 ft: 15152 corp: 32/388b lim: 45 exec/s: 51 rss: 73Mb L: 11/33 MS: 1 ShuffleBytes- 00:07:31.188 [2024-07-24 22:46:29.355691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.355713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.188 [2024-07-24 22:46:29.355778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:48484848 cdw11:48480002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.355790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.188 [2024-07-24 22:46:29.355840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:48484848 cdw11:48000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.188 [2024-07-24 22:46:29.355851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.188 #52 NEW cov: 12257 ft: 15168 corp: 33/416b lim: 45 exec/s: 52 rss: 73Mb L: 28/33 MS: 1 InsertRepeatedBytes- 00:07:31.446 [2024-07-24 22:46:29.405889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 [2024-07-24 22:46:29.405910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.446 [2024-07-24 22:46:29.405962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:56004848 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 [2024-07-24 22:46:29.405973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.446 [2024-07-24 22:46:29.406023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 [2024-07-24 22:46:29.406034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.446 #53 NEW cov: 12257 ft: 15183 corp: 34/444b lim: 45 exec/s: 53 rss: 73Mb L: 28/33 MS: 1 PersAutoDict- DE: "V\000\000\000\000\000\000\000"- 00:07:31.446 [2024-07-24 22:46:29.455851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000b0256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 
[2024-07-24 22:46:29.455872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.446 [2024-07-24 22:46:29.455938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c1c10000 cdw11:c1c10006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 [2024-07-24 22:46:29.455953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.446 #54 NEW cov: 12257 ft: 15196 corp: 35/469b lim: 45 exec/s: 54 rss: 73Mb L: 25/33 MS: 1 InsertRepeatedBytes- 00:07:31.446 [2024-07-24 22:46:29.505842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 [2024-07-24 22:46:29.505864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.446 #55 NEW cov: 12257 ft: 15207 corp: 36/480b lim: 45 exec/s: 55 rss: 73Mb L: 11/33 MS: 1 ChangeBinInt- 00:07:31.446 [2024-07-24 22:46:29.545960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9c000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 [2024-07-24 22:46:29.545981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.446 #56 NEW cov: 12257 ft: 15216 corp: 37/489b lim: 45 exec/s: 56 rss: 73Mb L: 9/33 MS: 1 ChangeByte- 00:07:31.446 [2024-07-24 22:46:29.596257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 [2024-07-24 22:46:29.596279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.446 [2024-07-24 22:46:29.596330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:56001502 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.446 [2024-07-24 22:46:29.596340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.446 #57 NEW cov: 12257 ft: 15291 corp: 38/510b lim: 45 exec/s: 57 rss: 73Mb L: 21/33 MS: 1 ChangeBinInt- 00:07:31.446 [2024-07-24 22:46:29.646750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00560256 cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.447 [2024-07-24 22:46:29.646773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.447 [2024-07-24 22:46:29.646824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.447 [2024-07-24 22:46:29.646835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.447 [2024-07-24 22:46:29.646885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.447 [2024-07-24 22:46:29.646911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:31.447 [2024-07-24 22:46:29.646962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.447 [2024-07-24 22:46:29.646972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.706 #58 NEW cov: 12257 ft: 15610 corp: 39/551b lim: 45 exec/s: 58 rss: 74Mb L: 41/41 MS: 1 CopyPart- 00:07:31.706 [2024-07-24 22:46:29.706432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000b0256 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.706 [2024-07-24 22:46:29.706454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.706 #59 NEW cov: 12257 ft: 15614 corp: 40/562b lim: 45 exec/s: 59 rss: 74Mb L: 11/41 MS: 1 ChangeBinInt- 00:07:31.706 [2024-07-24 22:46:29.746505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00ff0256 cdw11:04000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.706 [2024-07-24 22:46:29.746531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.706 #60 NEW cov: 12257 ft: 15655 corp: 41/573b lim: 45 exec/s: 60 rss: 74Mb L: 11/41 MS: 1 CMP- DE: "\377\004"- 00:07:31.706 [2024-07-24 22:46:29.786674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00010a0a cdw11:56220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.706 [2024-07-24 22:46:29.786697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.706 #61 NEW cov: 12257 ft: 15669 corp: 42/583b lim: 45 exec/s: 61 rss: 74Mb L: 10/41 MS: 1 EraseBytes- 00:07:31.706 [2024-07-24 22:46:29.836970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000256 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.706 [2024-07-24 22:46:29.836992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.706 [2024-07-24 22:46:29.837043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.706 [2024-07-24 22:46:29.837054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.706 #62 NEW cov: 12257 ft: 15676 corp: 43/602b lim: 45 exec/s: 31 rss: 74Mb L: 19/41 MS: 1 CMP- DE: "\000\000\000\000\001\000\000\000"- 00:07:31.706 #62 DONE cov: 12257 ft: 15676 corp: 43/602b lim: 45 exec/s: 31 rss: 74Mb 00:07:31.706 ###### Recommended dictionary. ###### 00:07:31.706 "\001\"" # Uses: 4 00:07:31.706 "V\000\000\000\000\000\000\000" # Uses: 4 00:07:31.706 "\377\004" # Uses: 0 00:07:31.706 "\000\000\000\000\001\000\000\000" # Uses: 0 00:07:31.706 ###### End of recommended dictionary. 
###### 00:07:31.706 Done 62 runs in 2 second(s) 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.965 22:46:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:31.965 [2024-07-24 22:46:30.021570] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:31.965 [2024-07-24 22:46:30.021637] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481087 ] 00:07:31.965 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.223 [2024-07-24 22:46:30.277035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.223 [2024-07-24 22:46:30.353316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.223 [2024-07-24 22:46:30.412251] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.223 [2024-07-24 22:46:30.428500] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:32.482 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.482 INFO: Seed: 3750489593 00:07:32.482 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:32.482 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:32.482 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:32.482 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.482 #2 INITED exec/s: 0 rss: 64Mb 00:07:32.482 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.482 This may also happen if the target rejected all inputs we tried so far 00:07:32.482 [2024-07-24 22:46:30.496212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.482 [2024-07-24 22:46:30.496248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.482 [2024-07-24 22:46:30.496348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.482 [2024-07-24 22:46:30.496362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.482 [2024-07-24 22:46:30.496459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.482 [2024-07-24 22:46:30.496473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.482 [2024-07-24 22:46:30.496563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.482 [2024-07-24 22:46:30.496577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.482 [2024-07-24 22:46:30.496667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:32.482 [2024-07-24 22:46:30.496680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.482 NEW_FUNC[1/698]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:32.482 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:32.482 #3 NEW 
cov: 11939 ft: 11948 corp: 2/11b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:32.482 [2024-07-24 22:46:30.655689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2e cdw11:00000000 00:07:32.482 [2024-07-24 22:46:30.655724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.482 NEW_FUNC[1/1]: 0x1ac1030 in sock_group_impl_poll_count /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:749 00:07:32.482 #5 NEW cov: 12060 ft: 12826 corp: 3/13b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 2 CrossOver-InsertByte- 00:07:32.741 [2024-07-24 22:46:30.706130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2e cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.706157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.741 #6 NEW cov: 12066 ft: 13131 corp: 4/15b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:32.741 [2024-07-24 22:46:30.767734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.767757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.741 [2024-07-24 22:46:30.767845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.767859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.741 [2024-07-24 22:46:30.767941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.767954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.741 [2024-07-24 22:46:30.768040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.768053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.741 [2024-07-24 22:46:30.768148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.768162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.741 #7 NEW cov: 12151 ft: 13339 corp: 5/25b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:07:32.741 [2024-07-24 22:46:30.838300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.838324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.741 [2024-07-24 22:46:30.838413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffd cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.838428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:32.741 [2024-07-24 22:46:30.838519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.741 [2024-07-24 22:46:30.838532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.741 [2024-07-24 22:46:30.838615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.742 [2024-07-24 22:46:30.838627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.742 [2024-07-24 22:46:30.838708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.742 [2024-07-24 22:46:30.838719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.742 #8 NEW cov: 12151 ft: 13503 corp: 6/35b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeBit- 00:07:32.742 [2024-07-24 22:46:30.898259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2e cdw11:00000000 00:07:32.742 [2024-07-24 22:46:30.898283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.742 [2024-07-24 22:46:30.898364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004848 cdw11:00000000 00:07:32.742 [2024-07-24 22:46:30.898380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.742 [2024-07-24 22:46:30.898465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004848 cdw11:00000000 00:07:32.742 [2024-07-24 22:46:30.898477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.742 [2024-07-24 22:46:30.898563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004848 cdw11:00000000 00:07:32.742 [2024-07-24 22:46:30.898575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.742 #9 NEW cov: 12151 ft: 13605 corp: 7/43b lim: 10 exec/s: 0 rss: 71Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:33.000 [2024-07-24 22:46:30.958903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.000 [2024-07-24 22:46:30.958927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:30.959013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.000 [2024-07-24 22:46:30.959028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:30.959112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.000 [2024-07-24 22:46:30.959125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:30.959203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.000 [2024-07-24 22:46:30.959217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:30.959306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:33.000 [2024-07-24 22:46:30.959318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.000 #10 NEW cov: 12151 ft: 13670 corp: 8/53b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:33.000 [2024-07-24 22:46:31.008865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.008890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.008983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffd cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.008999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.009093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.009106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.009198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.009211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.000 #11 NEW cov: 12151 ft: 13693 corp: 9/61b lim: 10 exec/s: 0 rss: 72Mb L: 8/10 MS: 1 EraseBytes- 00:07:33.000 [2024-07-24 22:46:31.079423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.079446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.079543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.079556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.079645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.079657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.079736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.079749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:07:33.000 [2024-07-24 22:46:31.079837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.079849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.000 #12 NEW cov: 12151 ft: 13711 corp: 10/71b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CMP- DE: "@\000\000\000\000\000\000\000"- 00:07:33.000 [2024-07-24 22:46:31.129475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.129500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.129593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.129606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.129696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.129708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.129795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.129808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.129897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000002e cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.129910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.000 #13 NEW cov: 12151 ft: 13824 corp: 11/81b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 PersAutoDict- DE: "@\000\000\000\000\000\000\000"- 00:07:33.000 [2024-07-24 22:46:31.178870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a05 cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.178896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.000 [2024-07-24 22:46:31.178997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002e cdw11:00000000 00:07:33.000 [2024-07-24 22:46:31.179013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.000 #14 NEW cov: 12151 ft: 14010 corp: 12/85b lim: 10 exec/s: 0 rss: 72Mb L: 4/10 MS: 1 CMP- DE: "\005\000"- 00:07:33.258 [2024-07-24 22:46:31.230025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008df5 cdw11:00000000 00:07:33.258 [2024-07-24 22:46:31.230049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.259 [2024-07-24 22:46:31.230135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 
cdw10:000082f4 cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.230147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.259 [2024-07-24 22:46:31.230231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009163 cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.230244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.259 [2024-07-24 22:46:31.230324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001700 cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.230336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.259 [2024-07-24 22:46:31.230418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.230432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.259 #15 NEW cov: 12151 ft: 14024 corp: 13/95b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CMP- DE: "\215\365\202\364\221c\027\000"- 00:07:33.259 [2024-07-24 22:46:31.279920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff2b cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.279944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.259 [2024-07-24 22:46:31.280039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.280054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.259 [2024-07-24 22:46:31.280139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fdff cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.280152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.259 [2024-07-24 22:46:31.280241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.280254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.259 #16 NEW cov: 12151 ft: 14036 corp: 14/104b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 InsertByte- 00:07:33.259 [2024-07-24 22:46:31.349548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff2b cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.349572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.259 [2024-07-24 22:46:31.349660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.349674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.259 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:33.259 #17 NEW cov: 12174 ft: 14086 corp: 15/109b lim: 10 exec/s: 0 rss: 72Mb L: 5/10 MS: 1 EraseBytes- 00:07:33.259 [2024-07-24 22:46:31.419509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a2e cdw11:00000000 00:07:33.259 [2024-07-24 22:46:31.419534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.259 #18 NEW cov: 12174 ft: 14106 corp: 16/111b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:07:33.518 [2024-07-24 22:46:31.470610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.470637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.470729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.470741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.470826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.470838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.470925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.470939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.518 #19 NEW cov: 12174 ft: 14121 corp: 17/120b lim: 10 exec/s: 19 rss: 72Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:33.518 [2024-07-24 22:46:31.520870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.520895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.520980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.520992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.521089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.521102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.521193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.521206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.518 #20 NEW cov: 12174 ft: 14167 corp: 18/128b lim: 10 exec/s: 20 rss: 72Mb L: 8/10 MS: 1 EraseBytes- 00:07:33.518 [2024-07-24 
22:46:31.571437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.571460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.571545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.571558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.571646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffa5 cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.571658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.571747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.571760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.571844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.571856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.518 #21 NEW cov: 12174 ft: 14169 corp: 19/138b lim: 10 exec/s: 21 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:07:33.518 [2024-07-24 22:46:31.621663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.621687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.621774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fbfd cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.621787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.621875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.621888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.621978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.621989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.622078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.622087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.518 #22 NEW cov: 12174 ft: 14186 corp: 20/148b lim: 10 exec/s: 22 rss: 72Mb L: 10/10 MS: 1 ChangeBit- 00:07:33.518 [2024-07-24 
22:46:31.671871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.671895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.671984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.671997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.672080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.672092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.672190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.672203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.672285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.672297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.518 #23 NEW cov: 12174 ft: 14201 corp: 21/158b lim: 10 exec/s: 23 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:33.518 [2024-07-24 22:46:31.722091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.722114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.722194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.722207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.722301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.722318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.722403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.722416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.518 [2024-07-24 22:46:31.722504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.518 [2024-07-24 22:46:31.722519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.777 #24 NEW cov: 12174 ft: 14202 corp: 22/168b lim: 10 exec/s: 24 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:33.777 [2024-07-24 
22:46:31.781897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.781922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.782013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003000 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.782025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.782114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.782129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.782214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.782228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.777 #25 NEW cov: 12174 ft: 14259 corp: 23/177b lim: 10 exec/s: 25 rss: 72Mb L: 9/10 MS: 1 ChangeByte- 00:07:33.777 [2024-07-24 22:46:31.842452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fdff cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.842475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.842557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fbfd cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.842571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.842657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.842670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.842750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.842763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.842854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.842867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.777 #26 NEW cov: 12174 ft: 14269 corp: 24/187b lim: 10 exec/s: 26 rss: 72Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:33.777 [2024-07-24 22:46:31.902596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.902620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 
22:46:31.902699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003000 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.902712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.902787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.902798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.902885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.902899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.902984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00002e0a cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.902997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.777 #27 NEW cov: 12174 ft: 14283 corp: 25/197b lim: 10 exec/s: 27 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:07:33.777 [2024-07-24 22:46:31.962354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2e cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.962377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.962457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004849 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.962470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.962543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004848 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.962556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.777 [2024-07-24 22:46:31.962648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004848 cdw11:00000000 00:07:33.777 [2024-07-24 22:46:31.962660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.036 #28 NEW cov: 12174 ft: 14372 corp: 26/205b lim: 10 exec/s: 28 rss: 72Mb L: 8/10 MS: 1 ChangeBit- 00:07:34.036 [2024-07-24 22:46:32.022740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.022763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.022854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff23 cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.022867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 
22:46:32.022947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.022961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.023048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.023059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.036 #29 NEW cov: 12174 ft: 14398 corp: 27/214b lim: 10 exec/s: 29 rss: 72Mb L: 9/10 MS: 1 InsertByte- 00:07:34.036 [2024-07-24 22:46:32.083278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.083303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.083392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.083404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.083484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.083496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.083583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.083595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.083676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000032ff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.083688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.036 #30 NEW cov: 12174 ft: 14409 corp: 28/224b lim: 10 exec/s: 30 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:07:34.036 [2024-07-24 22:46:32.143598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.143622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.143714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.143727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.143812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.143825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 
22:46:32.143904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.143916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.144005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.144017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.036 #31 NEW cov: 12174 ft: 14422 corp: 29/234b lim: 10 exec/s: 31 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:34.036 [2024-07-24 22:46:32.194023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.194047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.194128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.194142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.194229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.194242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.194332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.194345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.036 [2024-07-24 22:46:32.194424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.036 [2024-07-24 22:46:32.194438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.036 #35 NEW cov: 12174 ft: 14440 corp: 30/244b lim: 10 exec/s: 35 rss: 72Mb L: 10/10 MS: 4 EraseBytes-ShuffleBytes-CopyPart-CrossOver- 00:07:34.295 [2024-07-24 22:46:32.243904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.243928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.295 [2024-07-24 22:46:32.244010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.244023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.295 [2024-07-24 22:46:32.244102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.244114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:34.295 #36 NEW cov: 12174 ft: 14618 corp: 31/250b lim: 10 exec/s: 36 rss: 72Mb L: 6/10 MS: 1 EraseBytes- 00:07:34.295 [2024-07-24 22:46:32.294626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.294652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.295 [2024-07-24 22:46:32.294743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.294755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.295 [2024-07-24 22:46:32.294840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.294852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.295 [2024-07-24 22:46:32.294938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.294951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.295 [2024-07-24 22:46:32.295038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000032ff cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.295051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.295 #37 NEW cov: 12174 ft: 14636 corp: 32/260b lim: 10 exec/s: 37 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:34.295 [2024-07-24 22:46:32.363992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000eed1 cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.364017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.295 #38 NEW cov: 12174 ft: 14644 corp: 33/262b lim: 10 exec/s: 38 rss: 73Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:34.295 [2024-07-24 22:46:32.415443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.415473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.295 [2024-07-24 22:46:32.415561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003000 cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.415576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.295 [2024-07-24 22:46:32.415661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000fd cdw11:00000000 00:07:34.295 [2024-07-24 22:46:32.415675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.296 [2024-07-24 22:46:32.415756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.296 [2024-07-24 22:46:32.415770] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.296 [2024-07-24 22:46:32.415860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00002e0a cdw11:00000000 00:07:34.296 [2024-07-24 22:46:32.415873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.296 #39 NEW cov: 12174 ft: 14697 corp: 34/272b lim: 10 exec/s: 39 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:07:34.296 [2024-07-24 22:46:32.485560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fdf7 cdw11:00000000 00:07:34.296 [2024-07-24 22:46:32.485585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.296 [2024-07-24 22:46:32.485669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fbfd cdw11:00000000 00:07:34.296 [2024-07-24 22:46:32.485683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.296 [2024-07-24 22:46:32.485762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.296 [2024-07-24 22:46:32.485774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.296 [2024-07-24 22:46:32.485863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.296 [2024-07-24 22:46:32.485876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.296 [2024-07-24 22:46:32.485959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:34.296 [2024-07-24 22:46:32.485973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.554 #40 NEW cov: 12174 ft: 14699 corp: 35/282b lim: 10 exec/s: 20 rss: 73Mb L: 10/10 MS: 1 ChangeBit- 00:07:34.554 #40 DONE cov: 12174 ft: 14699 corp: 35/282b lim: 10 exec/s: 20 rss: 73Mb 00:07:34.554 ###### Recommended dictionary. ###### 00:07:34.554 "@\000\000\000\000\000\000\000" # Uses: 1 00:07:34.554 "\005\000" # Uses: 0 00:07:34.554 "\215\365\202\364\221c\027\000" # Uses: 0 00:07:34.554 ###### End of recommended dictionary. 
###### 00:07:34.554 Done 40 runs in 2 second(s) 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.554 22:46:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:34.554 [2024-07-24 22:46:32.686147] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:34.554 [2024-07-24 22:46:32.686225] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481529 ] 00:07:34.554 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.813 [2024-07-24 22:46:32.940247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.161 [2024-07-24 22:46:33.027741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.161 [2024-07-24 22:46:33.086141] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.161 [2024-07-24 22:46:33.102386] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:35.161 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.161 INFO: Seed: 2129520548 00:07:35.161 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:35.161 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:35.161 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:35.161 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.161 #2 INITED exec/s: 0 rss: 63Mb 00:07:35.161 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:35.161 This may also happen if the target rejected all inputs we tried so far 00:07:35.161 [2024-07-24 22:46:33.147024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a32 cdw11:00000000 00:07:35.161 [2024-07-24 22:46:33.147057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.161 NEW_FUNC[1/695]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:35.161 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:35.161 #3 NEW cov: 11916 ft: 11908 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:07:35.161 [2024-07-24 22:46:33.327517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a15 cdw11:00000000 00:07:35.161 [2024-07-24 22:46:33.327557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.161 [2024-07-24 22:46:33.327606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001515 cdw11:00000000 00:07:35.161 [2024-07-24 22:46:33.327619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.161 [2024-07-24 22:46:33.327643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001532 cdw11:00000000 00:07:35.161 [2024-07-24 22:46:33.327656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.440 NEW_FUNC[1/4]: 0x161b720 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3926 00:07:35.440 NEW_FUNC[2/4]: 0x17f5a60 in spdk_nvme_probe_poll_async 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1565 00:07:35.440 #4 NEW cov: 12060 ft: 12691 corp: 3/9b lim: 10 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:35.440 [2024-07-24 22:46:33.417637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.417670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.440 #5 NEW cov: 12066 ft: 12980 corp: 4/11b lim: 10 exec/s: 0 rss: 71Mb L: 2/6 MS: 1 CopyPart- 00:07:35.440 [2024-07-24 22:46:33.477918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.477946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.440 [2024-07-24 22:46:33.477988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.478001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.440 [2024-07-24 22:46:33.478025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000015 cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.478037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.440 [2024-07-24 22:46:33.478061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001515 cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.478078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.440 [2024-07-24 22:46:33.478103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00001532 cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.478115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.440 #6 NEW cov: 12151 ft: 13495 corp: 5/21b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:35.440 [2024-07-24 22:46:33.567997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f7cd cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.568027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.440 #7 NEW cov: 12151 ft: 13698 corp: 6/23b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:35.440 [2024-07-24 22:46:33.618191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a15 cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.618218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.440 [2024-07-24 22:46:33.618261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001515 cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.618274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.440 [2024-07-24 
22:46:33.618302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001532 cdw11:00000000 00:07:35.440 [2024-07-24 22:46:33.618314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.702 #8 NEW cov: 12151 ft: 13892 corp: 7/29b lim: 10 exec/s: 0 rss: 71Mb L: 6/10 MS: 1 ShuffleBytes- 00:07:35.702 [2024-07-24 22:46:33.678260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a15 cdw11:00000000 00:07:35.702 [2024-07-24 22:46:33.678287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.702 #9 NEW cov: 12151 ft: 13985 corp: 8/31b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 CrossOver- 00:07:35.702 [2024-07-24 22:46:33.728480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007878 cdw11:00000000 00:07:35.702 [2024-07-24 22:46:33.728507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.702 [2024-07-24 22:46:33.728549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007878 cdw11:00000000 00:07:35.702 [2024-07-24 22:46:33.728561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.702 [2024-07-24 22:46:33.728585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007878 cdw11:00000000 00:07:35.702 [2024-07-24 22:46:33.728598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.702 #10 NEW cov: 12151 ft: 14078 corp: 9/38b lim: 10 exec/s: 0 rss: 71Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:35.702 [2024-07-24 22:46:33.788562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f0cd cdw11:00000000 00:07:35.702 [2024-07-24 22:46:33.788590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.702 #11 NEW cov: 12151 ft: 14115 corp: 10/40b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:35.702 [2024-07-24 22:46:33.868833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001515 cdw11:00000000 00:07:35.702 [2024-07-24 22:46:33.868862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.702 [2024-07-24 22:46:33.868889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001515 cdw11:00000000 00:07:35.702 [2024-07-24 22:46:33.868902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.961 #12 NEW cov: 12151 ft: 14292 corp: 11/45b lim: 10 exec/s: 0 rss: 71Mb L: 5/10 MS: 1 EraseBytes- 00:07:35.961 [2024-07-24 22:46:33.928963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f76b cdw11:00000000 00:07:35.961 [2024-07-24 22:46:33.928991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.961 #13 NEW cov: 
12151 ft: 14313 corp: 12/47b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 ChangeByte- 00:07:35.961 [2024-07-24 22:46:33.989157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001515 cdw11:00000000 00:07:35.961 [2024-07-24 22:46:33.989185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.961 [2024-07-24 22:46:33.989212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001532 cdw11:00000000 00:07:35.961 [2024-07-24 22:46:33.989224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.961 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:35.961 #14 NEW cov: 12168 ft: 14340 corp: 13/51b lim: 10 exec/s: 0 rss: 71Mb L: 4/10 MS: 1 EraseBytes- 00:07:35.961 [2024-07-24 22:46:34.069331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a32 cdw11:00000000 00:07:35.961 [2024-07-24 22:46:34.069357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.961 #15 NEW cov: 12168 ft: 14370 corp: 14/53b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:35.961 [2024-07-24 22:46:34.119616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007878 cdw11:00000000 00:07:35.961 [2024-07-24 22:46:34.119643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.961 [2024-07-24 22:46:34.119670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007878 cdw11:00000000 00:07:35.961 [2024-07-24 22:46:34.119682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.961 [2024-07-24 22:46:34.119706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007878 cdw11:00000000 00:07:35.961 [2024-07-24 22:46:34.119718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.961 [2024-07-24 22:46:34.119742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00007878 cdw11:00000000 00:07:35.961 [2024-07-24 22:46:34.119754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.961 [2024-07-24 22:46:34.119793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000780a cdw11:00000000 00:07:35.961 [2024-07-24 22:46:34.119805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.219 #16 NEW cov: 12168 ft: 14408 corp: 15/63b lim: 10 exec/s: 16 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:36.219 [2024-07-24 22:46:34.199642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004b00 cdw11:00000000 00:07:36.219 [2024-07-24 22:46:34.199669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.219 
[2024-07-24 22:46:34.199711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.219 [2024-07-24 22:46:34.199723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.219 #19 NEW cov: 12168 ft: 14433 corp: 16/67b lim: 10 exec/s: 19 rss: 72Mb L: 4/10 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:36.219 [2024-07-24 22:46:34.279846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.219 [2024-07-24 22:46:34.279872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.219 #20 NEW cov: 12168 ft: 14443 corp: 17/69b lim: 10 exec/s: 20 rss: 72Mb L: 2/10 MS: 1 CopyPart- 00:07:36.219 [2024-07-24 22:46:34.329986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ab3 cdw11:00000000 00:07:36.219 [2024-07-24 22:46:34.330012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.219 #21 NEW cov: 12168 ft: 14468 corp: 18/71b lim: 10 exec/s: 21 rss: 72Mb L: 2/10 MS: 1 ChangeByte- 00:07:36.219 [2024-07-24 22:46:34.410224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002cd cdw11:00000000 00:07:36.219 [2024-07-24 22:46:34.410251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.478 #22 NEW cov: 12168 ft: 14493 corp: 19/73b lim: 10 exec/s: 22 rss: 72Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:36.478 [2024-07-24 22:46:34.490621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.490648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.490675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000fe cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.490688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.490711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffea cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.490724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.490748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ea15 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.490760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.490783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00001532 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.490795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.478 #23 NEW cov: 12168 ft: 14560 corp: 20/83b lim: 10 exec/s: 23 rss: 72Mb L: 10/10 MS: 1 
ChangeBinInt- 00:07:36.478 [2024-07-24 22:46:34.570828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.570854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.570882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.570895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.570918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000015 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.570930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.570954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001515 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.570966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.570989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00001531 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.571000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.620895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.620922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.620964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.620977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.621001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000015 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.621013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.621041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001515 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.621054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.478 [2024-07-24 22:46:34.621083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00001531 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.621095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.478 #25 NEW cov: 12168 ft: 14572 corp: 21/93b lim: 10 exec/s: 25 rss: 72Mb L: 10/10 MS: 2 ChangeASCIIInt-ShuffleBytes- 00:07:36.478 [2024-07-24 22:46:34.680911] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:36.478 [2024-07-24 22:46:34.680939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.736 #26 NEW cov: 12168 ft: 14616 corp: 22/96b lim: 10 exec/s: 26 rss: 72Mb L: 3/10 MS: 1 CrossOver- 00:07:36.736 [2024-07-24 22:46:34.731117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004b00 cdw11:00000000 00:07:36.736 [2024-07-24 22:46:34.731144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.737 [2024-07-24 22:46:34.731187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006200 cdw11:00000000 00:07:36.737 [2024-07-24 22:46:34.731199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.737 #27 NEW cov: 12168 ft: 14632 corp: 23/100b lim: 10 exec/s: 27 rss: 72Mb L: 4/10 MS: 1 ChangeByte- 00:07:36.737 [2024-07-24 22:46:34.811434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a32 cdw11:00000000 00:07:36.737 [2024-07-24 22:46:34.811461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.737 [2024-07-24 22:46:34.811488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.737 [2024-07-24 22:46:34.811500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.737 [2024-07-24 22:46:34.811524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000015 cdw11:00000000 00:07:36.737 [2024-07-24 22:46:34.811536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.737 [2024-07-24 22:46:34.811560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001515 cdw11:00000000 00:07:36.737 [2024-07-24 22:46:34.811571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.737 [2024-07-24 22:46:34.811594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00001531 cdw11:00000000 00:07:36.737 [2024-07-24 22:46:34.811606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.737 #28 NEW cov: 12168 ft: 14678 corp: 24/110b lim: 10 exec/s: 28 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:07:36.737 [2024-07-24 22:46:34.891477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000acd cdw11:00000000 00:07:36.737 [2024-07-24 22:46:34.891505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.996 #29 NEW cov: 12168 ft: 14708 corp: 25/112b lim: 10 exec/s: 29 rss: 72Mb L: 2/10 MS: 1 EraseBytes- 00:07:36.996 [2024-07-24 22:46:34.971722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004b00 cdw11:00000000 
00:07:36.996 [2024-07-24 22:46:34.971754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.996 [2024-07-24 22:46:34.971780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006221 cdw11:00000000 00:07:36.996 [2024-07-24 22:46:34.971793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.996 #30 NEW cov: 12175 ft: 14726 corp: 26/117b lim: 10 exec/s: 30 rss: 72Mb L: 5/10 MS: 1 InsertByte- 00:07:36.996 [2024-07-24 22:46:35.051852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f03d cdw11:00000000 00:07:36.996 [2024-07-24 22:46:35.051880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.996 #31 NEW cov: 12175 ft: 14754 corp: 27/119b lim: 10 exec/s: 31 rss: 72Mb L: 2/10 MS: 1 ChangeByte- 00:07:36.996 [2024-07-24 22:46:35.101985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000026b cdw11:00000000 00:07:36.996 [2024-07-24 22:46:35.102011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.996 #32 pulse cov: 12175 ft: 14767 corp: 27/119b lim: 10 exec/s: 16 rss: 72Mb 00:07:36.996 #32 NEW cov: 12175 ft: 14767 corp: 28/121b lim: 10 exec/s: 16 rss: 72Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:36.996 #32 DONE cov: 12175 ft: 14767 corp: 28/121b lim: 10 exec/s: 16 rss: 72Mb 00:07:36.996 Done 32 runs in 2 second(s) 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.255 22:46:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:37.255 [2024-07-24 22:46:35.303936] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:37.255 [2024-07-24 22:46:35.303993] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481944 ] 00:07:37.255 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.514 [2024-07-24 22:46:35.553199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.514 [2024-07-24 22:46:35.634458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.514 [2024-07-24 22:46:35.693003] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.514 [2024-07-24 22:46:35.709251] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:37.773 INFO: Running with entropic power schedule (0xFF, 100). 00:07:37.773 INFO: Seed: 441551880 00:07:37.773 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:37.773 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:37.773 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:37.773 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.773 [2024-07-24 22:46:35.764616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.764645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.773 #2 INITED cov: 11975 ft: 11974 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:37.773 [2024-07-24 22:46:35.805048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.805071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.773 [2024-07-24 22:46:35.805143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.805154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.773 [2024-07-24 22:46:35.805205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.805216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.773 [2024-07-24 22:46:35.805276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.805287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.773 #3 NEW cov: 12088 ft: 13442 corp: 2/5b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:37.773 [2024-07-24 22:46:35.864925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.864947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.773 [2024-07-24 22:46:35.865017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.865029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.773 #4 NEW cov: 12094 ft: 13761 corp: 3/7b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CrossOver- 00:07:37.773 [2024-07-24 22:46:35.905198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.905220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.773 [2024-07-24 22:46:35.905276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.905287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.773 [2024-07-24 22:46:35.905338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.905348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.773 #5 NEW cov: 12179 ft: 14144 corp: 4/10b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 CopyPart- 00:07:37.773 [2024-07-24 22:46:35.955520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.955542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.773 [2024-07-24 22:46:35.955610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.955621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.773 
[2024-07-24 22:46:35.955671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.955682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.773 [2024-07-24 22:46:35.955732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.773 [2024-07-24 22:46:35.955742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.031 #6 NEW cov: 12179 ft: 14247 corp: 5/14b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:38.031 [2024-07-24 22:46:36.005681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.031 [2024-07-24 22:46:36.005703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.031 [2024-07-24 22:46:36.005756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.005767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.005817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.005828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.005878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.005888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.032 #7 NEW cov: 12179 ft: 14308 corp: 6/18b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeBit- 00:07:38.032 [2024-07-24 22:46:36.055793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.055815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.055868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.055879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.055929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.055939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.055989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.055999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.032 #8 NEW cov: 12179 ft: 14348 corp: 7/22b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:07:38.032 [2024-07-24 22:46:36.095729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.095751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.095818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.095829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.095880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.095890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.032 #9 NEW cov: 12179 ft: 14410 corp: 8/25b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 CrossOver- 00:07:38.032 [2024-07-24 22:46:36.136046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.136067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.136139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.136151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.136202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.136211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.136263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.136273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.032 #10 NEW cov: 12179 ft: 14500 corp: 9/29b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertByte- 00:07:38.032 [2024-07-24 22:46:36.186177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.186201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.186267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.186278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.186330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.186341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.186390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.186401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.032 #11 NEW cov: 12179 ft: 14537 corp: 10/33b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:38.032 [2024-07-24 22:46:36.236352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.236374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.236425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.236436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.236487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.236498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.032 [2024-07-24 22:46:36.236549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.032 [2024-07-24 22:46:36.236559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.291 #12 NEW cov: 12179 ft: 14555 corp: 11/37b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeByte- 00:07:38.291 [2024-07-24 22:46:36.286493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.286515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.286567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.286578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.286631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.286641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.286694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.286706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.291 #13 NEW cov: 12179 ft: 14570 corp: 12/41b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:38.291 [2024-07-24 22:46:36.336468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.336492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.336544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.336555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.336605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.336616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.291 #14 NEW cov: 12179 ft: 14572 corp: 13/44b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 ShuffleBytes- 00:07:38.291 [2024-07-24 22:46:36.396625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.396649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.396701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.396712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.396763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.396773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.291 #15 NEW cov: 
12179 ft: 14598 corp: 14/47b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 ShuffleBytes- 00:07:38.291 [2024-07-24 22:46:36.436864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.436887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.436940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.436950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.437016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.437027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.437082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.437093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.291 #16 NEW cov: 12179 ft: 14681 corp: 15/51b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 InsertByte- 00:07:38.291 [2024-07-24 22:46:36.487068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.487099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.487152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.487163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.487213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.487223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.291 [2024-07-24 22:46:36.487274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.291 [2024-07-24 22:46:36.487284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.551 #17 NEW cov: 12179 ft: 14698 corp: 16/55b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:38.551 [2024-07-24 22:46:36.526804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 
[2024-07-24 22:46:36.526827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.526880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.526890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.551 #18 NEW cov: 12179 ft: 14725 corp: 17/57b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 ChangeByte- 00:07:38.551 [2024-07-24 22:46:36.567380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.567403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.567469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.567480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.567532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.567542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.567593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.567604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.567656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.567666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.551 #19 NEW cov: 12179 ft: 14816 corp: 18/62b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CrossOver- 00:07:38.551 [2024-07-24 22:46:36.607541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.607564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.607619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.607630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.607682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.607692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.607744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.607754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.551 [2024-07-24 22:46:36.607806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.551 [2024-07-24 22:46:36.607816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.551 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:38.551 #20 NEW cov: 12202 ft: 14845 corp: 19/67b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:07:38.810 [2024-07-24 22:46:36.758588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.758641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.810 [2024-07-24 22:46:36.758723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.758742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.810 [2024-07-24 22:46:36.758820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.758839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.810 [2024-07-24 22:46:36.758914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.758933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.810 [2024-07-24 22:46:36.759006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.759025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.810 #21 NEW cov: 12202 ft: 14874 corp: 20/72b lim: 5 exec/s: 21 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:07:38.810 [2024-07-24 22:46:36.817839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.817863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.810 [2024-07-24 22:46:36.817934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.817946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.810 #22 NEW cov: 12202 ft: 14934 corp: 21/74b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 EraseBytes- 00:07:38.810 [2024-07-24 22:46:36.857749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.857771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.810 #23 NEW cov: 12202 ft: 14955 corp: 22/75b lim: 5 exec/s: 23 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:07:38.810 [2024-07-24 22:46:36.897848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.810 [2024-07-24 22:46:36.897870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.811 #24 NEW cov: 12202 ft: 15039 corp: 23/76b lim: 5 exec/s: 24 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:07:38.811 [2024-07-24 22:46:36.938340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.811 [2024-07-24 22:46:36.938362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.811 [2024-07-24 22:46:36.938414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.811 [2024-07-24 22:46:36.938425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.811 [2024-07-24 22:46:36.938478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.811 [2024-07-24 22:46:36.938488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.811 #25 NEW cov: 12202 ft: 15051 corp: 24/79b lim: 5 exec/s: 25 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:07:38.811 [2024-07-24 22:46:36.978063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.811 [2024-07-24 22:46:36.978090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.811 #26 NEW cov: 12202 ft: 15071 corp: 25/80b lim: 5 exec/s: 26 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:07:39.069 [2024-07-24 22:46:37.028937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.069 [2024-07-24 22:46:37.028958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.069 [2024-07-24 22:46:37.029028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.069 [2024-07-24 22:46:37.029038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.069 [2024-07-24 22:46:37.029096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.069 [2024-07-24 22:46:37.029107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.069 [2024-07-24 22:46:37.029162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.069 [2024-07-24 22:46:37.029173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.069 [2024-07-24 22:46:37.029226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.069 [2024-07-24 22:46:37.029236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.069 #27 NEW cov: 12202 ft: 15090 corp: 26/85b lim: 5 exec/s: 27 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:07:39.069 [2024-07-24 22:46:37.079055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.069 [2024-07-24 22:46:37.079080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.069 [2024-07-24 22:46:37.079151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.069 [2024-07-24 22:46:37.079162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.079215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.079225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.079289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.079299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.079350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.079360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:07:39.070 #28 NEW cov: 12202 ft: 15113 corp: 27/90b lim: 5 exec/s: 28 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:07:39.070 [2024-07-24 22:46:37.129213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.129235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.129292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.129303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.129355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.129366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.129419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.129430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.129489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.129499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.070 #29 NEW cov: 12202 ft: 15130 corp: 28/95b lim: 5 exec/s: 29 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:39.070 [2024-07-24 22:46:37.178839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.178861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.178918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.178929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.070 #30 NEW cov: 12202 ft: 15153 corp: 29/97b lim: 5 exec/s: 30 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:39.070 [2024-07-24 22:46:37.229332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.229354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.229407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:39.070 [2024-07-24 22:46:37.229417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.229472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.229483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.229552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.229563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.070 #31 NEW cov: 12202 ft: 15176 corp: 30/101b lim: 5 exec/s: 31 rss: 73Mb L: 4/5 MS: 1 ChangeBinInt- 00:07:39.070 [2024-07-24 22:46:37.269375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.269397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.269466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.269477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.269530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.269541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.070 [2024-07-24 22:46:37.269596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.070 [2024-07-24 22:46:37.269606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.329 #32 NEW cov: 12202 ft: 15186 corp: 31/105b lim: 5 exec/s: 32 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:07:39.329 [2024-07-24 22:46:37.309391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.309413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.309468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.309478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.309549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.309560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.329 #33 NEW cov: 12202 ft: 15200 corp: 32/108b lim: 5 exec/s: 33 rss: 73Mb L: 3/5 MS: 1 ChangeBinInt- 00:07:39.329 [2024-07-24 22:46:37.349271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.349293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.349346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.349357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.329 #34 NEW cov: 12202 ft: 15232 corp: 33/110b lim: 5 exec/s: 34 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:07:39.329 [2024-07-24 22:46:37.399905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.399927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.399997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.400008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.400060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.400070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.400126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.400137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.400190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.400201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.329 #35 NEW cov: 12202 ft: 15296 corp: 34/115b lim: 5 exec/s: 35 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:07:39.329 [2024-07-24 22:46:37.450070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.450097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.450166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.450177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.450229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.450240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.450291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.450301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.450353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.450364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.329 #36 NEW cov: 12202 ft: 15304 corp: 35/120b lim: 5 exec/s: 36 rss: 74Mb L: 5/5 MS: 1 InsertByte- 00:07:39.329 [2024-07-24 22:46:37.489667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.489690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.489743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.489754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.329 #37 NEW cov: 12202 ft: 15331 corp: 36/122b lim: 5 exec/s: 37 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:07:39.329 [2024-07-24 22:46:37.530184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.530206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.530258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.530269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.530322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.329 [2024-07-24 22:46:37.530333] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.329 [2024-07-24 22:46:37.530385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.330 [2024-07-24 22:46:37.530396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.589 #38 NEW cov: 12202 ft: 15382 corp: 37/126b lim: 5 exec/s: 38 rss: 74Mb L: 4/5 MS: 1 ChangeBinInt- 00:07:39.589 [2024-07-24 22:46:37.570060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.570133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.570201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.570212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.570290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.570301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.589 #39 NEW cov: 12202 ft: 15390 corp: 38/129b lim: 5 exec/s: 39 rss: 74Mb L: 3/5 MS: 1 ChangeBit- 00:07:39.589 [2024-07-24 22:46:37.610505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.610527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.610581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.610592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.610644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.610655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.610706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.610716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.610769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:39.589 [2024-07-24 22:46:37.610778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.589 #40 NEW cov: 12202 ft: 15394 corp: 39/134b lim: 5 exec/s: 40 rss: 74Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:39.589 [2024-07-24 22:46:37.660525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.660549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.660603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.660613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.660666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.660677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.660729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.660743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.589 #41 NEW cov: 12202 ft: 15399 corp: 40/138b lim: 5 exec/s: 41 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:07:39.589 [2024-07-24 22:46:37.700727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.700750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.700819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.700831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.700884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.700894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.700945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.700956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.701007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.701018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.589 #42 NEW cov: 12202 ft: 15428 corp: 41/143b lim: 5 exec/s: 42 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:39.589 [2024-07-24 22:46:37.740715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.740738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.740805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.740816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.589 [2024-07-24 22:46:37.740867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.589 [2024-07-24 22:46:37.740878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.590 [2024-07-24 22:46:37.740928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.590 [2024-07-24 22:46:37.740939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.590 #43 NEW cov: 12202 ft: 15432 corp: 42/147b lim: 5 exec/s: 21 rss: 74Mb L: 4/5 MS: 1 ChangeByte- 00:07:39.590 #43 DONE cov: 12202 ft: 15432 corp: 42/147b lim: 5 exec/s: 21 rss: 74Mb 00:07:39.590 Done 43 runs in 2 second(s) 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.849 22:46:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:39.849 [2024-07-24 22:46:37.927283] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:39.849 [2024-07-24 22:46:37.927362] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482335 ] 00:07:39.849 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.108 [2024-07-24 22:46:38.182323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.108 [2024-07-24 22:46:38.265754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.368 [2024-07-24 22:46:38.324230] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.368 [2024-07-24 22:46:38.340470] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:40.368 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:40.368 INFO: Seed: 3072557233 00:07:40.368 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:40.368 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:40.368 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:40.368 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.368 [2024-07-24 22:46:38.385145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.368 [2024-07-24 22:46:38.385176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.368 #2 INITED cov: 11970 ft: 11959 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:40.368 [2024-07-24 22:46:38.435150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.368 [2024-07-24 22:46:38.435178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.368 #3 NEW cov: 12088 ft: 12508 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 CopyPart- 00:07:40.368 [2024-07-24 22:46:38.515414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.368 [2024-07-24 22:46:38.515443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.368 [2024-07-24 22:46:38.515473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.368 [2024-07-24 22:46:38.515486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.368 #4 NEW cov: 12094 ft: 13478 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:07:40.627 [2024-07-24 22:46:38.575539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.575566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.627 #5 NEW cov: 12179 ft: 13739 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeByte- 00:07:40.627 [2024-07-24 22:46:38.625647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.625673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.627 #6 NEW cov: 12179 ft: 13820 corp: 5/6b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeByte- 00:07:40.627 [2024-07-24 22:46:38.705932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.705959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.627 
[2024-07-24 22:46:38.705990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.706003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.627 #7 NEW cov: 12179 ft: 13908 corp: 6/8b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:07:40.627 [2024-07-24 22:46:38.786277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.786303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.627 [2024-07-24 22:46:38.786348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.786360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.627 [2024-07-24 22:46:38.786386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.786398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.627 [2024-07-24 22:46:38.786424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.786436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.627 [2024-07-24 22:46:38.786461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.627 [2024-07-24 22:46:38.786476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.627 #8 NEW cov: 12179 ft: 14305 corp: 7/13b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:40.886 [2024-07-24 22:46:38.846448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:38.846475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:38.846505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:38.846518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:38.846543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:38.846556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:38.846581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:38.846593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:38.846619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:38.846631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.886 #9 NEW cov: 12179 ft: 14359 corp: 8/18b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:40.886 [2024-07-24 22:46:38.926409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:38.926434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.886 #10 NEW cov: 12179 ft: 14385 corp: 9/19b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:07:40.886 [2024-07-24 22:46:39.006721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:39.006749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:39.006794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:39.006808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.886 #11 NEW cov: 12179 ft: 14479 corp: 10/21b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeByte- 00:07:40.886 [2024-07-24 22:46:39.077049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:39.077088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:39.077119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:39.077132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:39.077162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:39.077175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:39.077200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:39.077213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.886 [2024-07-24 22:46:39.077238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.886 [2024-07-24 22:46:39.077250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.145 #12 NEW cov: 12179 ft: 14512 corp: 11/26b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:41.145 [2024-07-24 22:46:39.167219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.167247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.145 [2024-07-24 22:46:39.167292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.167304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.145 [2024-07-24 22:46:39.167330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.167342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.145 [2024-07-24 22:46:39.167368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.167379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.145 #13 NEW cov: 12179 ft: 14559 corp: 12/30b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 EraseBytes- 00:07:41.145 [2024-07-24 22:46:39.257571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.257599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.145 [2024-07-24 22:46:39.257629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.257642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.145 [2024-07-24 22:46:39.257668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.257681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.145 [2024-07-24 22:46:39.257707] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.257735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.145 [2024-07-24 22:46:39.257765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.145 [2024-07-24 22:46:39.257777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.404 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:41.404 #14 NEW cov: 12202 ft: 14621 corp: 13/35b lim: 5 exec/s: 14 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:07:41.404 [2024-07-24 22:46:39.448035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.448087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.404 [2024-07-24 22:46:39.448120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.448133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.404 [2024-07-24 22:46:39.448159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.448171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.404 #15 NEW cov: 12202 ft: 14838 corp: 14/38b lim: 5 exec/s: 15 rss: 72Mb L: 3/5 MS: 1 CopyPart- 00:07:41.404 [2024-07-24 22:46:39.528029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.528058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.404 [2024-07-24 22:46:39.528110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.528124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.404 #16 NEW cov: 12202 ft: 14859 corp: 15/40b lim: 5 exec/s: 16 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:07:41.404 [2024-07-24 22:46:39.578392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.578420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.404 [2024-07-24 22:46:39.578450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 
cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.578463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.404 [2024-07-24 22:46:39.578488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.578501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.404 [2024-07-24 22:46:39.578527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.578539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.404 [2024-07-24 22:46:39.578565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.404 [2024-07-24 22:46:39.578581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.663 #17 NEW cov: 12202 ft: 14895 corp: 16/45b lim: 5 exec/s: 17 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:07:41.663 [2024-07-24 22:46:39.668634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.668661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.668691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.668704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.668730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.668742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.668768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.668780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.668805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.668817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.663 #18 NEW cov: 12202 ft: 14918 corp: 17/50b lim: 5 exec/s: 18 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:07:41.663 [2024-07-24 22:46:39.728762] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.728789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.728819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.728832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.728873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.728886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.728912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.728924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.728950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.728963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.663 #19 NEW cov: 12202 ft: 14944 corp: 18/55b lim: 5 exec/s: 19 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:07:41.663 [2024-07-24 22:46:39.808761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.808789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.663 [2024-07-24 22:46:39.808834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.663 [2024-07-24 22:46:39.808847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.922 #20 NEW cov: 12202 ft: 14955 corp: 19/57b lim: 5 exec/s: 20 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:41.922 [2024-07-24 22:46:39.898945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:39.898971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.922 #21 NEW cov: 12202 ft: 14988 corp: 20/58b lim: 5 exec/s: 21 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:41.922 [2024-07-24 22:46:39.959212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:39.959238] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.922 [2024-07-24 22:46:39.959282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:39.959295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.922 [2024-07-24 22:46:39.959320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:39.959333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.922 #22 NEW cov: 12202 ft: 15003 corp: 21/61b lim: 5 exec/s: 22 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:07:41.922 [2024-07-24 22:46:40.049395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:40.049426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.922 #23 NEW cov: 12202 ft: 15012 corp: 22/62b lim: 5 exec/s: 23 rss: 72Mb L: 1/5 MS: 1 ChangeASCIIInt- 00:07:41.922 [2024-07-24 22:46:40.099645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:40.099672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.922 [2024-07-24 22:46:40.099701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:40.099729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.922 [2024-07-24 22:46:40.099755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:40.099767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.922 [2024-07-24 22:46:40.099792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.922 [2024-07-24 22:46:40.099805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.181 #24 NEW cov: 12202 ft: 15051 corp: 23/66b lim: 5 exec/s: 24 rss: 72Mb L: 4/5 MS: 1 EraseBytes- 00:07:42.181 [2024-07-24 22:46:40.159713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.159740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.181 [2024-07-24 22:46:40.159784] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.159797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.181 #25 NEW cov: 12202 ft: 15056 corp: 24/68b lim: 5 exec/s: 25 rss: 72Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:42.181 [2024-07-24 22:46:40.209878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.209904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.181 [2024-07-24 22:46:40.209948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.209961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.181 [2024-07-24 22:46:40.209987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.210000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.181 #26 NEW cov: 12202 ft: 15080 corp: 25/71b lim: 5 exec/s: 26 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:07:42.181 [2024-07-24 22:46:40.300099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.300137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.181 [2024-07-24 22:46:40.300182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.300195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.181 #27 NEW cov: 12202 ft: 15131 corp: 26/73b lim: 5 exec/s: 27 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:07:42.181 [2024-07-24 22:46:40.360391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.360418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.181 [2024-07-24 22:46:40.360463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.360475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.181 [2024-07-24 22:46:40.360501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.360514] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.181 [2024-07-24 22:46:40.360539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.360555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.181 [2024-07-24 22:46:40.360581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.181 [2024-07-24 22:46:40.360593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.441 #28 NEW cov: 12202 ft: 15146 corp: 27/78b lim: 5 exec/s: 14 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:42.441 #28 DONE cov: 12202 ft: 15146 corp: 27/78b lim: 5 exec/s: 14 rss: 73Mb 00:07:42.441 Done 28 runs in 2 second(s) 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:42.441 22:46:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 
-c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:42.441 [2024-07-24 22:46:40.562065] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:42.441 [2024-07-24 22:46:40.562153] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482740 ] 00:07:42.441 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.700 [2024-07-24 22:46:40.816784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.700 [2024-07-24 22:46:40.900247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.969 [2024-07-24 22:46:40.958756] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.969 [2024-07-24 22:46:40.974999] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:42.969 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.969 INFO: Seed: 1411589580 00:07:42.969 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:42.969 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:42.969 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:42.969 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.969 #2 INITED exec/s: 0 rss: 65Mb 00:07:42.969 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.969 This may also happen if the target rejected all inputs we tried so far 00:07:42.969 [2024-07-24 22:46:41.030526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.969 [2024-07-24 22:46:41.030552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.969 [2024-07-24 22:46:41.030628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.969 [2024-07-24 22:46:41.030640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.969 NEW_FUNC[1/700]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:42.969 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.969 #3 NEW cov: 11998 ft: 11983 corp: 2/18b lim: 40 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:07:43.233 [2024-07-24 22:46:41.181429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.233 [2024-07-24 22:46:41.181483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.233 [2024-07-24 22:46:41.181564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:43.233 [2024-07-24 22:46:41.181584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.233 [2024-07-24 22:46:41.181661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.233 [2024-07-24 22:46:41.181679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.233 [2024-07-24 22:46:41.181757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.233 [2024-07-24 22:46:41.181775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.233 #5 NEW cov: 12111 ft: 13132 corp: 3/55b lim: 40 exec/s: 0 rss: 71Mb L: 37/37 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:43.233 [2024-07-24 22:46:41.230762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.233 [2024-07-24 22:46:41.230787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.233 #6 NEW cov: 12117 ft: 13653 corp: 4/68b lim: 40 exec/s: 0 rss: 71Mb L: 13/37 MS: 1 EraseBytes- 00:07:43.233 [2024-07-24 22:46:41.281297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.233 [2024-07-24 22:46:41.281320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.233 [2024-07-24 22:46:41.281391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.233 [2024-07-24 22:46:41.281405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.233 [2024-07-24 22:46:41.281462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.233 [2024-07-24 22:46:41.281473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.233 [2024-07-24 22:46:41.281526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535553 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.234 [2024-07-24 22:46:41.281537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.234 #7 NEW cov: 12202 ft: 13910 corp: 5/105b lim: 40 exec/s: 0 rss: 71Mb L: 37/37 MS: 1 ChangeByte- 00:07:43.234 [2024-07-24 22:46:41.331067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.234 [2024-07-24 22:46:41.331092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.234 #9 NEW cov: 12202 ft: 
14055 corp: 6/114b lim: 40 exec/s: 0 rss: 71Mb L: 9/37 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:43.234 [2024-07-24 22:46:41.371119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02fdffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.234 [2024-07-24 22:46:41.371141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.234 #10 NEW cov: 12202 ft: 14185 corp: 7/124b lim: 40 exec/s: 0 rss: 71Mb L: 10/37 MS: 1 InsertByte- 00:07:43.234 [2024-07-24 22:46:41.421431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.234 [2024-07-24 22:46:41.421452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.234 [2024-07-24 22:46:41.421509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.234 [2024-07-24 22:46:41.421520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.492 #11 NEW cov: 12202 ft: 14278 corp: 8/141b lim: 40 exec/s: 0 rss: 72Mb L: 17/37 MS: 1 ShuffleBytes- 00:07:43.492 [2024-07-24 22:46:41.461434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02fdffff cdw11:ff40ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.492 [2024-07-24 22:46:41.461456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.492 #12 NEW cov: 12202 ft: 14312 corp: 9/151b lim: 40 exec/s: 0 rss: 72Mb L: 10/37 MS: 1 ChangeByte- 00:07:43.493 [2024-07-24 22:46:41.511656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.511678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.493 [2024-07-24 22:46:41.511736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.511747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.493 #13 NEW cov: 12202 ft: 14368 corp: 10/168b lim: 40 exec/s: 0 rss: 72Mb L: 17/37 MS: 1 ChangeBinInt- 00:07:43.493 [2024-07-24 22:46:41.551942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.551969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.493 [2024-07-24 22:46:41.552027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.552038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:43.493 [2024-07-24 22:46:41.552098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.552109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.493 #14 NEW cov: 12202 ft: 14579 corp: 11/199b lim: 40 exec/s: 0 rss: 72Mb L: 31/37 MS: 1 InsertRepeatedBytes- 00:07:43.493 [2024-07-24 22:46:41.591903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.591925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.493 [2024-07-24 22:46:41.591983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.591994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.493 #15 NEW cov: 12202 ft: 14649 corp: 12/216b lim: 40 exec/s: 0 rss: 72Mb L: 17/37 MS: 1 ShuffleBytes- 00:07:43.493 [2024-07-24 22:46:41.631886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00090000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.631908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.493 #16 NEW cov: 12202 ft: 14675 corp: 13/229b lim: 40 exec/s: 0 rss: 72Mb L: 13/37 MS: 1 ChangeBinInt- 00:07:43.493 [2024-07-24 22:46:41.682220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02fdffff cdw11:ff858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.682243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.493 [2024-07-24 22:46:41.682326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.682337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.493 [2024-07-24 22:46:41.682388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:858585ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.493 [2024-07-24 22:46:41.682398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.752 #17 NEW cov: 12202 ft: 14684 corp: 14/257b lim: 40 exec/s: 0 rss: 72Mb L: 28/37 MS: 1 InsertRepeatedBytes- 00:07:43.752 [2024-07-24 22:46:41.722129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.722152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.752 #18 NEW cov: 12202 ft: 14707 corp: 15/269b lim: 40 exec/s: 0 rss: 
72Mb L: 12/37 MS: 1 EraseBytes- 00:07:43.752 [2024-07-24 22:46:41.772645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.772672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.772726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.772737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.772791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.772801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.772857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.772867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.752 #19 NEW cov: 12202 ft: 14729 corp: 16/306b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 CopyPart- 00:07:43.752 [2024-07-24 22:46:41.812751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.812774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.812831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.812841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.812895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.812906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.812960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.812970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.752 #20 NEW cov: 12202 ft: 14771 corp: 17/340b lim: 40 exec/s: 0 rss: 72Mb L: 34/37 MS: 1 CopyPart- 00:07:43.752 [2024-07-24 22:46:41.862883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.862905] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.862963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.862974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.863029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53534d53 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.863038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.863098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.863109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.752 #21 NEW cov: 12202 ft: 14789 corp: 18/377b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeBinInt- 00:07:43.752 [2024-07-24 22:46:41.912909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.912931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.912988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5353534d cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.912999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.752 [2024-07-24 22:46:41.913052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.752 [2024-07-24 22:46:41.913062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.752 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:43.752 #22 NEW cov: 12225 ft: 14831 corp: 19/403b lim: 40 exec/s: 0 rss: 72Mb L: 26/37 MS: 1 EraseBytes- 00:07:44.012 [2024-07-24 22:46:41.963205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.012 [2024-07-24 22:46:41.963227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.012 [2024-07-24 22:46:41.963283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.012 [2024-07-24 22:46:41.963293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.012 [2024-07-24 22:46:41.963349] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.012 [2024-07-24 22:46:41.963359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.012 [2024-07-24 22:46:41.963411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.012 [2024-07-24 22:46:41.963421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.012 #23 NEW cov: 12225 ft: 14837 corp: 20/440b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeBit- 00:07:44.012 [2024-07-24 22:46:42.002903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.012 [2024-07-24 22:46:42.002925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.012 #24 NEW cov: 12225 ft: 14869 corp: 21/455b lim: 40 exec/s: 24 rss: 72Mb L: 15/37 MS: 1 EraseBytes- 00:07:44.012 [2024-07-24 22:46:42.053442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.012 [2024-07-24 22:46:42.053465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.012 [2024-07-24 22:46:42.053520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.053531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.053587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.053597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.053650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.053660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.013 #25 NEW cov: 12225 ft: 14876 corp: 22/492b lim: 40 exec/s: 25 rss: 72Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:44.013 [2024-07-24 22:46:42.103313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a02ffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.103335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.103407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.103417] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.013 #26 NEW cov: 12225 ft: 14882 corp: 23/515b lim: 40 exec/s: 26 rss: 72Mb L: 23/37 MS: 1 CrossOver- 00:07:44.013 [2024-07-24 22:46:42.143657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.143679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.143734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.143745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.143799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.143809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.143863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53555353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.143873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.193797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535324 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.193818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.193889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000053 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.193900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.193956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.193965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.013 [2024-07-24 22:46:42.194022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53555353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.013 [2024-07-24 22:46:42.194032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.013 #28 NEW cov: 12225 ft: 14935 corp: 24/551b lim: 40 exec/s: 28 rss: 72Mb L: 36/37 MS: 2 EraseBytes-ChangeBinInt- 00:07:44.273 [2024-07-24 22:46:42.233794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02fdffff cdw11:fffff4f4 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.233816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.233887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.233899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.233954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.233965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.273 #29 NEW cov: 12225 ft: 14947 corp: 25/580b lim: 40 exec/s: 29 rss: 72Mb L: 29/37 MS: 1 InsertRepeatedBytes- 00:07:44.273 [2024-07-24 22:46:42.273795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02ffffff cdw11:ffffff02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.273817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.273873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.273884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.273 #30 NEW cov: 12225 ft: 14955 corp: 26/597b lim: 40 exec/s: 30 rss: 72Mb L: 17/37 MS: 1 CopyPart- 00:07:44.273 [2024-07-24 22:46:42.313755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.313777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.273 #31 NEW cov: 12225 ft: 14996 corp: 27/605b lim: 40 exec/s: 31 rss: 72Mb L: 8/37 MS: 1 EraseBytes- 00:07:44.273 [2024-07-24 22:46:42.364018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.364040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.364115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.364127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.273 #32 NEW cov: 12225 ft: 15037 corp: 28/622b lim: 40 exec/s: 32 rss: 72Mb L: 17/37 MS: 1 ChangeBinInt- 00:07:44.273 [2024-07-24 22:46:42.414427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53533b53 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.414449] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.414521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.414535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.414590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.414600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.414655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.414666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.273 #33 NEW cov: 12225 ft: 15050 corp: 29/660b lim: 40 exec/s: 33 rss: 72Mb L: 38/38 MS: 1 InsertByte- 00:07:44.273 [2024-07-24 22:46:42.454705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.454726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.454800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.454811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.454866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.454876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.454932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.454943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.273 [2024-07-24 22:46:42.454997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:53535353 cdw11:53530a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.273 [2024-07-24 22:46:42.455007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.273 #34 NEW cov: 12225 ft: 15110 corp: 30/700b lim: 40 exec/s: 34 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:07:44.532 [2024-07-24 22:46:42.494640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53533b53 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:44.532 [2024-07-24 22:46:42.494661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.532 [2024-07-24 22:46:42.494735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.494746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.532 [2024-07-24 22:46:42.494803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.494813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.532 [2024-07-24 22:46:42.494867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.494880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.532 #35 NEW cov: 12225 ft: 15122 corp: 31/738b lim: 40 exec/s: 35 rss: 72Mb L: 38/40 MS: 1 CopyPart- 00:07:44.532 [2024-07-24 22:46:42.544531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.544553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.532 [2024-07-24 22:46:42.544628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.544639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.532 #36 NEW cov: 12225 ft: 15129 corp: 32/755b lim: 40 exec/s: 36 rss: 72Mb L: 17/40 MS: 1 ShuffleBytes- 00:07:44.532 [2024-07-24 22:46:42.584887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.584909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.532 [2024-07-24 22:46:42.584980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.584991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.532 [2024-07-24 22:46:42.585046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.585056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.532 [2024-07-24 22:46:42.585116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 
cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.585126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.532 #37 NEW cov: 12225 ft: 15148 corp: 33/794b lim: 40 exec/s: 37 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:07:44.532 [2024-07-24 22:46:42.635045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.635067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.532 [2024-07-24 22:46:42.635146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.532 [2024-07-24 22:46:42.635158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.533 [2024-07-24 22:46:42.635214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.533 [2024-07-24 22:46:42.635224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.533 [2024-07-24 22:46:42.635281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3c3c3c3c cdw11:3c3c3c3c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.533 [2024-07-24 22:46:42.635292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.533 #38 NEW cov: 12225 ft: 15155 corp: 34/828b lim: 40 exec/s: 38 rss: 73Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:07:44.533 [2024-07-24 22:46:42.674890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0a000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.533 [2024-07-24 22:46:42.674912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.533 [2024-07-24 22:46:42.674986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.533 [2024-07-24 22:46:42.674997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.533 #39 NEW cov: 12225 ft: 15157 corp: 35/845b lim: 40 exec/s: 39 rss: 73Mb L: 17/40 MS: 1 CrossOver- 00:07:44.533 [2024-07-24 22:46:42.725175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02fdffff cdw11:fffff4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.533 [2024-07-24 22:46:42.725197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.533 [2024-07-24 22:46:42.725269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffff4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.533 [2024-07-24 22:46:42.725281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:44.533 [2024-07-24 22:46:42.725336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.533 [2024-07-24 22:46:42.725346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.792 #40 NEW cov: 12225 ft: 15164 corp: 36/874b lim: 40 exec/s: 40 rss: 73Mb L: 29/40 MS: 1 CopyPart- 00:07:44.792 [2024-07-24 22:46:42.775416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.775438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.775513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00800000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.775523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.775580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.775590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.775644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.775655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.792 #41 NEW cov: 12225 ft: 15179 corp: 37/908b lim: 40 exec/s: 41 rss: 73Mb L: 34/40 MS: 1 ChangeBit- 00:07:44.792 [2024-07-24 22:46:42.815427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0a000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.815448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.815521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff166397 cdw11:9c350da8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.815535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.815591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.815602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.792 #42 NEW cov: 12225 ft: 15218 corp: 38/933b lim: 40 exec/s: 42 rss: 73Mb L: 25/40 MS: 1 CMP- DE: "\377\026c\227\2345\015\250"- 00:07:44.792 [2024-07-24 22:46:42.865654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.865676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.865751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.865763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.865820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.865830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.865884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.865894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.792 #43 NEW cov: 12225 ft: 15225 corp: 39/968b lim: 40 exec/s: 43 rss: 73Mb L: 35/40 MS: 1 CMP- DE: "\000\002\000\000"- 00:07:44.792 [2024-07-24 22:46:42.905555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.905576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.905649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.905660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.792 #44 NEW cov: 12225 ft: 15236 corp: 40/985b lim: 40 exec/s: 44 rss: 73Mb L: 17/40 MS: 1 ChangeBit- 00:07:44.792 [2024-07-24 22:46:42.946071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.946097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.946150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3b535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.946161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.946217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.946227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.946281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.946294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.946349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:53535353 cdw11:5353530a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.946360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.792 #45 NEW cov: 12225 ft: 15306 corp: 41/1025b lim: 40 exec/s: 45 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:07:44.792 [2024-07-24 22:46:42.995823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.995846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.792 [2024-07-24 22:46:42.995907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00002e01 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.792 [2024-07-24 22:46:42.995919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.052 #46 NEW cov: 12225 ft: 15318 corp: 42/1042b lim: 40 exec/s: 23 rss: 73Mb L: 17/40 MS: 1 ChangeByte- 00:07:45.052 #46 DONE cov: 12225 ft: 15318 corp: 42/1042b lim: 40 exec/s: 23 rss: 73Mb 00:07:45.052 ###### Recommended dictionary. ###### 00:07:45.052 "\377\026c\227\2345\015\250" # Uses: 0 00:07:45.052 "\000\002\000\000" # Uses: 0 00:07:45.052 ###### End of recommended dictionary. 
###### 00:07:45.052 Done 46 runs in 2 second(s) 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:45.052 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:45.053 22:46:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:45.053 [2024-07-24 22:46:43.175519] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:45.053 [2024-07-24 22:46:43.175596] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483118 ] 00:07:45.053 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.313 [2024-07-24 22:46:43.435084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.313 [2024-07-24 22:46:43.518057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.573 [2024-07-24 22:46:43.576726] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.573 [2024-07-24 22:46:43.592970] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:45.573 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.573 INFO: Seed: 4028607085 00:07:45.573 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:45.573 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:45.573 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:45.573 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.573 #2 INITED exec/s: 0 rss: 64Mb 00:07:45.573 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.573 This may also happen if the target rejected all inputs we tried so far 00:07:45.573 [2024-07-24 22:46:43.641174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.573 [2024-07-24 22:46:43.641218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.573 [2024-07-24 22:46:43.641297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.573 [2024-07-24 22:46:43.641317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.573 [2024-07-24 22:46:43.641389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.573 [2024-07-24 22:46:43.641407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.831 NEW_FUNC[1/701]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:45.831 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.831 #11 NEW cov: 12003 ft: 12002 corp: 2/28b lim: 40 exec/s: 0 rss: 70Mb L: 27/27 MS: 4 ShuffleBytes-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:45.831 [2024-07-24 22:46:43.801273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.801324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.831 #17 NEW cov: 12123 ft: 13286 corp: 3/42b 
lim: 40 exec/s: 0 rss: 70Mb L: 14/27 MS: 1 EraseBytes- 00:07:45.831 [2024-07-24 22:46:43.861640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.861664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.831 [2024-07-24 22:46:43.861723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.861735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.831 [2024-07-24 22:46:43.861791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.861805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.831 [2024-07-24 22:46:43.861860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.861871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.831 #20 NEW cov: 12129 ft: 13821 corp: 4/78b lim: 40 exec/s: 0 rss: 70Mb L: 36/36 MS: 3 CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:45.831 [2024-07-24 22:46:43.901407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.901430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.831 [2024-07-24 22:46:43.901487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.901498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.831 #26 NEW cov: 12214 ft: 14294 corp: 5/97b lim: 40 exec/s: 0 rss: 70Mb L: 19/36 MS: 1 EraseBytes- 00:07:45.831 [2024-07-24 22:46:43.941529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.941551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.831 [2024-07-24 22:46:43.941610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff0100 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.941621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.831 #27 NEW cov: 12214 ft: 14416 corp: 6/116b lim: 40 exec/s: 0 rss: 70Mb L: 19/36 MS: 1 ChangeBinInt- 00:07:45.831 [2024-07-24 22:46:43.991658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a969696 cdw11:96969696 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.991681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.831 [2024-07-24 22:46:43.991756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:96969696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:43.991768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.831 #33 NEW cov: 12214 ft: 14589 corp: 7/139b lim: 40 exec/s: 0 rss: 70Mb L: 23/36 MS: 1 InsertRepeatedBytes- 00:07:45.831 [2024-07-24 22:46:44.031748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0afff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:44.031770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.831 [2024-07-24 22:46:44.031846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.831 [2024-07-24 22:46:44.031858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.089 #34 NEW cov: 12214 ft: 14689 corp: 8/158b lim: 40 exec/s: 0 rss: 70Mb L: 19/36 MS: 1 ChangeBit- 00:07:46.089 [2024-07-24 22:46:44.071875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.089 [2024-07-24 22:46:44.071899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.089 [2024-07-24 22:46:44.071972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fffff7ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.089 [2024-07-24 22:46:44.071983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.089 #35 NEW cov: 12214 ft: 14753 corp: 9/177b lim: 40 exec/s: 0 rss: 70Mb L: 19/36 MS: 1 ChangeBit- 00:07:46.090 [2024-07-24 22:46:44.112149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.112171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.090 [2024-07-24 22:46:44.112244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.112256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.090 [2024-07-24 22:46:44.112312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.112323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.090 #36 NEW cov: 12214 ft: 14837 corp: 
10/201b lim: 40 exec/s: 0 rss: 71Mb L: 24/36 MS: 1 EraseBytes- 00:07:46.090 [2024-07-24 22:46:44.162098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0afff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.162120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.090 [2024-07-24 22:46:44.162192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.162204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.090 #37 NEW cov: 12214 ft: 14894 corp: 11/220b lim: 40 exec/s: 0 rss: 71Mb L: 19/36 MS: 1 CopyPart- 00:07:46.090 [2024-07-24 22:46:44.212279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.212301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.090 [2024-07-24 22:46:44.212386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:96969696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.212398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.090 #38 NEW cov: 12214 ft: 14980 corp: 12/243b lim: 40 exec/s: 0 rss: 71Mb L: 23/36 MS: 1 CMP- DE: "\010\000\000\000"- 00:07:46.090 [2024-07-24 22:46:44.262400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0afff7 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.262422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.090 [2024-07-24 22:46:44.262495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff5d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.090 [2024-07-24 22:46:44.262507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.090 #39 NEW cov: 12214 ft: 15024 corp: 13/262b lim: 40 exec/s: 0 rss: 71Mb L: 19/36 MS: 1 ChangeByte- 00:07:46.348 [2024-07-24 22:46:44.302373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.302396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.348 #45 NEW cov: 12214 ft: 15054 corp: 14/276b lim: 40 exec/s: 0 rss: 71Mb L: 14/36 MS: 1 EraseBytes- 00:07:46.348 [2024-07-24 22:46:44.353013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.353035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.348 [2024-07-24 22:46:44.353095] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.353107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.348 [2024-07-24 22:46:44.353165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.353176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.348 [2024-07-24 22:46:44.353235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.353246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.348 #46 NEW cov: 12214 ft: 15074 corp: 15/311b lim: 40 exec/s: 0 rss: 71Mb L: 35/36 MS: 1 InsertRepeatedBytes- 00:07:46.348 [2024-07-24 22:46:44.392768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.392791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.348 [2024-07-24 22:46:44.392851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff0100 cdw11:00fcffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.392862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.348 #47 NEW cov: 12214 ft: 15093 corp: 16/330b lim: 40 exec/s: 0 rss: 71Mb L: 19/36 MS: 1 ChangeBinInt- 00:07:46.348 [2024-07-24 22:46:44.432883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffff01ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.432907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.348 [2024-07-24 22:46:44.432969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:010000fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.348 [2024-07-24 22:46:44.432980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.348 #48 NEW cov: 12214 ft: 15148 corp: 17/351b lim: 40 exec/s: 0 rss: 71Mb L: 21/36 MS: 1 CopyPart- 00:07:46.348 [2024-07-24 22:46:44.483026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.349 [2024-07-24 22:46:44.483048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.349 [2024-07-24 22:46:44.483108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:96969696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.349 [2024-07-24 22:46:44.483122] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.349 #49 NEW cov: 12214 ft: 15199 corp: 18/374b lim: 40 exec/s: 0 rss: 71Mb L: 23/36 MS: 1 ShuffleBytes- 00:07:46.349 [2024-07-24 22:46:44.533536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.349 [2024-07-24 22:46:44.533559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.349 [2024-07-24 22:46:44.533636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff0100 cdw11:00fcffac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.349 [2024-07-24 22:46:44.533648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.349 [2024-07-24 22:46:44.533706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.349 [2024-07-24 22:46:44.533716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.349 [2024-07-24 22:46:44.533774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff010000 cdw11:fcffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.349 [2024-07-24 22:46:44.533785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.607 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:46.607 #50 NEW cov: 12237 ft: 15240 corp: 19/411b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 CopyPart- 00:07:46.607 [2024-07-24 22:46:44.573322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.607 [2024-07-24 22:46:44.573344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.607 [2024-07-24 22:46:44.573402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1fffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.607 [2024-07-24 22:46:44.573413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.607 #51 NEW cov: 12237 ft: 15248 corp: 20/432b lim: 40 exec/s: 0 rss: 72Mb L: 21/37 MS: 1 CMP- DE: "\000\037"- 00:07:46.608 [2024-07-24 22:46:44.613425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff001fff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.613448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.613507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.613518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.608 #53 NEW cov: 12237 ft: 
15269 corp: 21/450b lim: 40 exec/s: 53 rss: 72Mb L: 18/37 MS: 2 ChangeByte-CrossOver- 00:07:46.608 [2024-07-24 22:46:44.653885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.653908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.653981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.653993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.654052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:fffffff7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.654063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.654123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.654135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.608 #54 NEW cov: 12237 ft: 15323 corp: 22/482b lim: 40 exec/s: 54 rss: 72Mb L: 32/37 MS: 1 InsertRepeatedBytes- 00:07:46.608 [2024-07-24 22:46:44.704089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affd9 cdw11:d9ffd9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.704112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.704188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.704200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.704256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.704267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.704325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.704336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.608 #55 NEW cov: 12237 ft: 15346 corp: 23/519b lim: 40 exec/s: 55 rss: 72Mb L: 37/37 MS: 1 CopyPart- 00:07:46.608 [2024-07-24 22:46:44.753998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.754020] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.754095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:96969696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.754107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.608 [2024-07-24 22:46:44.754166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:96969696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.754178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.608 #56 NEW cov: 12237 ft: 15351 corp: 24/544b lim: 40 exec/s: 56 rss: 72Mb L: 25/37 MS: 1 CopyPart- 00:07:46.608 [2024-07-24 22:46:44.803778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.608 [2024-07-24 22:46:44.803800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.868 #57 NEW cov: 12237 ft: 15417 corp: 25/558b lim: 40 exec/s: 57 rss: 72Mb L: 14/37 MS: 1 ChangeBinInt- 00:07:46.868 [2024-07-24 22:46:44.854293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000001f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.854318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.868 [2024-07-24 22:46:44.854376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.854387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.868 [2024-07-24 22:46:44.854446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.854457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.868 #58 NEW cov: 12237 ft: 15449 corp: 26/584b lim: 40 exec/s: 58 rss: 72Mb L: 26/37 MS: 1 PersAutoDict- DE: "\000\037"- 00:07:46.868 [2024-07-24 22:46:44.904301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.904334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.868 [2024-07-24 22:46:44.904409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1fffffff cdw11:ff33a21b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.904420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.868 #59 NEW cov: 12237 ft: 15467 corp: 27/605b lim: 40 exec/s: 59 rss: 72Mb L: 21/37 MS: 1 CMP- DE: 
"3\242\033\350\234c\027\000"- 00:07:46.868 [2024-07-24 22:46:44.954598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.954620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.868 [2024-07-24 22:46:44.954679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000f800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.954690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.868 [2024-07-24 22:46:44.954749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.954760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.868 #60 NEW cov: 12237 ft: 15480 corp: 28/629b lim: 40 exec/s: 60 rss: 72Mb L: 24/37 MS: 1 ChangeBinInt- 00:07:46.868 [2024-07-24 22:46:44.994507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.994528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.868 [2024-07-24 22:46:44.994605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:44.994617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.868 #61 NEW cov: 12237 ft: 15498 corp: 29/648b lim: 40 exec/s: 61 rss: 72Mb L: 19/37 MS: 1 CrossOver- 00:07:46.868 [2024-07-24 22:46:45.034591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:45.034614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.868 [2024-07-24 22:46:45.034676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1fffffff cdw11:1ba29c33 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.868 [2024-07-24 22:46:45.034687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.868 #62 NEW cov: 12237 ft: 15571 corp: 30/669b lim: 40 exec/s: 62 rss: 72Mb L: 21/37 MS: 1 ShuffleBytes- 00:07:47.126 [2024-07-24 22:46:45.084760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.126 [2024-07-24 22:46:45.084782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.126 [2024-07-24 22:46:45.084858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1fff33ff cdw11:1bffe8ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:47.126 [2024-07-24 22:46:45.084869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.126 #63 NEW cov: 12237 ft: 15631 corp: 31/690b lim: 40 exec/s: 63 rss: 72Mb L: 21/37 MS: 1 ShuffleBytes- 00:07:47.126 [2024-07-24 22:46:45.124861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.126 [2024-07-24 22:46:45.124883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.127 [2024-07-24 22:46:45.124959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.124971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.127 #64 NEW cov: 12237 ft: 15642 corp: 32/709b lim: 40 exec/s: 64 rss: 72Mb L: 19/37 MS: 1 CMP- DE: "\013\000\000\000"- 00:07:47.127 [2024-07-24 22:46:45.165178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000001f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.165199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.127 [2024-07-24 22:46:45.165282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.165293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.127 [2024-07-24 22:46:45.165365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.165377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.127 #65 NEW cov: 12237 ft: 15678 corp: 33/739b lim: 40 exec/s: 65 rss: 72Mb L: 30/37 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:47.127 [2024-07-24 22:46:45.214951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.214972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.127 #66 NEW cov: 12237 ft: 15687 corp: 34/752b lim: 40 exec/s: 66 rss: 72Mb L: 13/37 MS: 1 EraseBytes- 00:07:47.127 [2024-07-24 22:46:45.255450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ff001763 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.255472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.127 [2024-07-24 22:46:45.255546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:98cd68b0 cdw11:feffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.255562] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.127 [2024-07-24 22:46:45.255619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1fffffff cdw11:1ba29c33 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.255630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.127 #67 NEW cov: 12237 ft: 15707 corp: 35/781b lim: 40 exec/s: 67 rss: 72Mb L: 29/37 MS: 1 CMP- DE: "\000\027c\230\315h\260\376"- 00:07:47.127 [2024-07-24 22:46:45.305463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a969696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.305485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.127 [2024-07-24 22:46:45.305545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:96969696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.127 [2024-07-24 22:46:45.305556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.127 #68 NEW cov: 12237 ft: 15723 corp: 36/804b lim: 40 exec/s: 68 rss: 72Mb L: 23/37 MS: 1 ChangeBit- 00:07:47.386 [2024-07-24 22:46:45.345591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.345614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.345674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1fffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.345691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.386 #69 NEW cov: 12237 ft: 15754 corp: 37/827b lim: 40 exec/s: 69 rss: 72Mb L: 23/37 MS: 1 PersAutoDict- DE: "\000\037"- 00:07:47.386 [2024-07-24 22:46:45.385781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000001f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.385804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.385863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.385873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.385932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.385943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.386 #70 NEW cov: 12237 ft: 15779 corp: 38/857b lim: 40 exec/s: 70 rss: 72Mb L: 30/37 MS: 1 CopyPart- 00:07:47.386 
[2024-07-24 22:46:45.435918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.435939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.435998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:96962696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.436009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.436090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:96969696 cdw11:96969696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.436102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.386 #71 NEW cov: 12237 ft: 15790 corp: 39/881b lim: 40 exec/s: 71 rss: 72Mb L: 24/37 MS: 1 InsertByte- 00:07:47.386 [2024-07-24 22:46:45.476179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.476201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.476277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffac0aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.476289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.476358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.476369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.476425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.476435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.386 #72 NEW cov: 12237 ft: 15795 corp: 40/917b lim: 40 exec/s: 72 rss: 72Mb L: 36/37 MS: 1 CopyPart- 00:07:47.386 [2024-07-24 22:46:45.516274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a004e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.516297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.516372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.516383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:47.386 [2024-07-24 22:46:45.516443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.516454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.386 [2024-07-24 22:46:45.516511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.516522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.386 #73 NEW cov: 12237 ft: 15805 corp: 41/953b lim: 40 exec/s: 73 rss: 72Mb L: 36/37 MS: 1 ChangeByte- 00:07:47.386 [2024-07-24 22:46:45.555912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ac0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.386 [2024-07-24 22:46:45.555934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.386 #74 NEW cov: 12237 ft: 15819 corp: 42/967b lim: 40 exec/s: 74 rss: 72Mb L: 14/37 MS: 1 ShuffleBytes- 00:07:47.646 [2024-07-24 22:46:45.596571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a004e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.646 [2024-07-24 22:46:45.596598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.646 [2024-07-24 22:46:45.596659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.646 [2024-07-24 22:46:45.596670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.646 [2024-07-24 22:46:45.596728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.646 [2024-07-24 22:46:45.596738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.646 [2024-07-24 22:46:45.596796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.646 [2024-07-24 22:46:45.596807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.646 #75 NEW cov: 12237 ft: 15833 corp: 43/1003b lim: 40 exec/s: 37 rss: 73Mb L: 36/37 MS: 1 CopyPart- 00:07:47.646 #75 DONE cov: 12237 ft: 15833 corp: 43/1003b lim: 40 exec/s: 37 rss: 73Mb 00:07:47.646 ###### Recommended dictionary. ###### 00:07:47.646 "\010\000\000\000" # Uses: 1 00:07:47.646 "\000\037" # Uses: 2 00:07:47.646 "3\242\033\350\234c\027\000" # Uses: 0 00:07:47.646 "\013\000\000\000" # Uses: 0 00:07:47.646 "\000\027c\230\315h\260\376" # Uses: 0 00:07:47.646 ###### End of recommended dictionary. 
###### 00:07:47.646 Done 75 runs in 2 second(s) 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:47.646 22:46:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:47.646 [2024-07-24 22:46:45.794899] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:47.646 [2024-07-24 22:46:45.794971] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483516 ] 00:07:47.646 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.905 [2024-07-24 22:46:46.058652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.163 [2024-07-24 22:46:46.139081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.163 [2024-07-24 22:46:46.197676] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.163 [2024-07-24 22:46:46.213919] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:48.163 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.163 INFO: Seed: 2354683113 00:07:48.163 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:48.163 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:48.163 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:48.163 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.163 #2 INITED exec/s: 0 rss: 64Mb 00:07:48.163 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:48.163 This may also happen if the target rejected all inputs we tried so far 00:07:48.163 [2024-07-24 22:46:46.269304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.163 [2024-07-24 22:46:46.269330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.422 NEW_FUNC[1/701]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:48.422 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:48.422 #4 NEW cov: 12008 ft: 12002 corp: 2/16b lim: 40 exec/s: 0 rss: 71Mb L: 15/15 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:07:48.422 [2024-07-24 22:46:46.420238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0b240aec cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.422 [2024-07-24 22:46:46.420291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.422 [2024-07-24 22:46:46.420372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.422 [2024-07-24 22:46:46.420392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.422 #8 NEW cov: 12121 ft: 13279 corp: 3/33b lim: 40 exec/s: 0 rss: 71Mb L: 17/17 MS: 4 ChangeBit-ShuffleBytes-InsertByte-CrossOver- 00:07:48.422 [2024-07-24 22:46:46.469802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.422 [2024-07-24 22:46:46.469826] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.422 #9 NEW cov: 12127 ft: 13583 corp: 4/48b lim: 40 exec/s: 0 rss: 72Mb L: 15/17 MS: 1 ChangeBit- 00:07:48.422 [2024-07-24 22:46:46.519929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3626 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.422 [2024-07-24 22:46:46.519954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.422 #10 NEW cov: 12212 ft: 13992 corp: 5/63b lim: 40 exec/s: 0 rss: 72Mb L: 15/17 MS: 1 ChangeBit- 00:07:48.422 [2024-07-24 22:46:46.560047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:3b363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.422 [2024-07-24 22:46:46.560070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.422 #11 NEW cov: 12212 ft: 14068 corp: 6/78b lim: 40 exec/s: 0 rss: 72Mb L: 15/17 MS: 1 ChangeByte- 00:07:48.422 [2024-07-24 22:46:46.600503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3626 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.422 [2024-07-24 22:46:46.600525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.422 [2024-07-24 22:46:46.600600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ec362636 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.422 [2024-07-24 22:46:46.600611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.422 [2024-07-24 22:46:46.600667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:36ececec cdw11:3636ecec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.422 [2024-07-24 22:46:46.600677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.681 #12 NEW cov: 12212 ft: 14370 corp: 7/105b lim: 40 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 CopyPart- 00:07:48.681 [2024-07-24 22:46:46.650252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3626 cdw11:36360400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.681 [2024-07-24 22:46:46.650275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.681 #13 NEW cov: 12212 ft: 14441 corp: 8/120b lim: 40 exec/s: 0 rss: 72Mb L: 15/27 MS: 1 CMP- DE: "\004\000\000\000"- 00:07:48.681 [2024-07-24 22:46:46.690407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.681 [2024-07-24 22:46:46.690429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.681 #14 NEW cov: 12212 ft: 14494 corp: 9/135b lim: 40 exec/s: 0 rss: 72Mb L: 15/27 MS: 1 ChangeByte- 00:07:48.681 [2024-07-24 22:46:46.740544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecff16 cdw11:6399bae8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:48.681 [2024-07-24 22:46:46.740566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.681 #15 NEW cov: 12212 ft: 14537 corp: 10/150b lim: 40 exec/s: 0 rss: 72Mb L: 15/27 MS: 1 CMP- DE: "\377\026c\231\272\350\271\360"- 00:07:48.681 [2024-07-24 22:46:46.790771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec36ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.681 [2024-07-24 22:46:46.790793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.681 #16 NEW cov: 12212 ft: 14610 corp: 11/158b lim: 40 exec/s: 0 rss: 72Mb L: 8/27 MS: 1 EraseBytes- 00:07:48.681 [2024-07-24 22:46:46.830989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3626 cdw11:26363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.681 [2024-07-24 22:46:46.831012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.681 [2024-07-24 22:46:46.831068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:363636ec cdw11:ecec3636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.681 [2024-07-24 22:46:46.831086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.681 #17 NEW cov: 12212 ft: 14642 corp: 12/179b lim: 40 exec/s: 0 rss: 72Mb L: 21/27 MS: 1 EraseBytes- 00:07:48.681 [2024-07-24 22:46:46.881117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3626 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.681 [2024-07-24 22:46:46.881143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.681 [2024-07-24 22:46:46.881202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3636ec05 cdw11:000000ec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.681 [2024-07-24 22:46:46.881213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.940 #18 NEW cov: 12212 ft: 14700 corp: 13/198b lim: 40 exec/s: 0 rss: 72Mb L: 19/27 MS: 1 CMP- DE: "\005\000\000\000"- 00:07:48.940 [2024-07-24 22:46:46.921439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec36ec cdw11:c6c6c6c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:46.921461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.940 [2024-07-24 22:46:46.921519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:46.921530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.940 [2024-07-24 22:46:46.921586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:46.921596] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.940 #19 NEW cov: 12212 ft: 14717 corp: 14/228b lim: 40 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:48.940 [2024-07-24 22:46:46.971171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0792f1 cdw11:a3996317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:46.971193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.940 #20 NEW cov: 12212 ft: 14750 corp: 15/237b lim: 40 exec/s: 0 rss: 72Mb L: 9/30 MS: 1 CMP- DE: "\007\222\361\243\231c\027\000"- 00:07:48.940 [2024-07-24 22:46:47.011282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec36ee cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:47.011303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.940 #21 NEW cov: 12212 ft: 14813 corp: 16/245b lim: 40 exec/s: 0 rss: 72Mb L: 8/30 MS: 1 ChangeBit- 00:07:48.940 [2024-07-24 22:46:47.051444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8aec36ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:47.051466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.940 #22 NEW cov: 12212 ft: 14938 corp: 17/253b lim: 40 exec/s: 0 rss: 72Mb L: 8/30 MS: 1 ChangeBit- 00:07:48.940 [2024-07-24 22:46:47.091924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:47.091948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.940 [2024-07-24 22:46:47.092020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:47.092031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.940 [2024-07-24 22:46:47.092093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:47.092108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.940 #24 NEW cov: 12212 ft: 14980 corp: 18/281b lim: 40 exec/s: 0 rss: 72Mb L: 28/30 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:48.940 [2024-07-24 22:46:47.131844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecff0a cdw11:166399ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 22:46:47.131867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.940 [2024-07-24 22:46:47.131918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:e8ec36ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.940 [2024-07-24 
22:46:47.131929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.199 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:49.199 #25 NEW cov: 12235 ft: 15056 corp: 19/304b lim: 40 exec/s: 0 rss: 72Mb L: 23/30 MS: 1 CrossOver- 00:07:49.199 [2024-07-24 22:46:47.191830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3626 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.199 [2024-07-24 22:46:47.191853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.199 #26 NEW cov: 12235 ft: 15098 corp: 20/319b lim: 40 exec/s: 0 rss: 72Mb L: 15/30 MS: 1 ChangeASCIIInt- 00:07:49.199 [2024-07-24 22:46:47.232137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecff0a cdw11:166399d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.199 [2024-07-24 22:46:47.232160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.199 [2024-07-24 22:46:47.232218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:e8ec36ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.199 [2024-07-24 22:46:47.232229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.199 #27 NEW cov: 12235 ft: 15145 corp: 21/342b lim: 40 exec/s: 27 rss: 72Mb L: 23/30 MS: 1 ChangeByte- 00:07:49.199 [2024-07-24 22:46:47.282454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecff16 cdw11:0aecff16 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.199 [2024-07-24 22:46:47.282477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.199 [2024-07-24 22:46:47.282534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:6399bae8 cdw11:b9f06399 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.199 [2024-07-24 22:46:47.282545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.199 [2024-07-24 22:46:47.282600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:bae8b9f0 cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.199 [2024-07-24 22:46:47.282611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.199 #28 NEW cov: 12235 ft: 15168 corp: 22/367b lim: 40 exec/s: 28 rss: 72Mb L: 25/30 MS: 1 CopyPart- 00:07:49.199 [2024-07-24 22:46:47.322177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:33333333 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.199 [2024-07-24 22:46:47.322200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.199 #29 NEW cov: 12235 ft: 15189 corp: 23/382b lim: 40 exec/s: 29 rss: 72Mb L: 15/30 MS: 1 ChangeASCIIInt- 00:07:49.199 [2024-07-24 22:46:47.362367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND 
(19) qid:0 cid:4 nsid:0 cdw10:0aec0f00 cdw11:3b363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.199 [2024-07-24 22:46:47.362393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.199 #30 NEW cov: 12235 ft: 15196 corp: 24/397b lim: 40 exec/s: 30 rss: 72Mb L: 15/30 MS: 1 ChangeBinInt- 00:07:49.457 [2024-07-24 22:46:47.412528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3626 cdw11:3636b636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.457 [2024-07-24 22:46:47.412549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.457 #31 NEW cov: 12235 ft: 15277 corp: 25/412b lim: 40 exec/s: 31 rss: 72Mb L: 15/30 MS: 1 ChangeBit- 00:07:49.457 [2024-07-24 22:46:47.452639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec33ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.457 [2024-07-24 22:46:47.452661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.457 #32 NEW cov: 12235 ft: 15292 corp: 26/420b lim: 40 exec/s: 32 rss: 72Mb L: 8/30 MS: 1 ChangeASCIIInt- 00:07:49.457 [2024-07-24 22:46:47.492761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.457 [2024-07-24 22:46:47.492783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.457 #33 NEW cov: 12235 ft: 15300 corp: 27/435b lim: 40 exec/s: 33 rss: 72Mb L: 15/30 MS: 1 ChangeByte- 00:07:49.457 [2024-07-24 22:46:47.542975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecff0a cdw11:16ba63e8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.457 [2024-07-24 22:46:47.542997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.457 [2024-07-24 22:46:47.543069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:99ec36ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.457 [2024-07-24 22:46:47.543086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.457 #34 NEW cov: 12235 ft: 15312 corp: 28/458b lim: 40 exec/s: 34 rss: 72Mb L: 23/30 MS: 1 ShuffleBytes- 00:07:49.457 [2024-07-24 22:46:47.583002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:33333333 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.457 [2024-07-24 22:46:47.583025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.457 #35 NEW cov: 12235 ft: 15345 corp: 29/473b lim: 40 exec/s: 35 rss: 72Mb L: 15/30 MS: 1 ChangeASCIIInt- 00:07:49.458 [2024-07-24 22:46:47.633296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecff0a cdw11:166399ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.458 [2024-07-24 22:46:47.633319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.458 
[2024-07-24 22:46:47.633376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:e8ec36ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.458 [2024-07-24 22:46:47.633387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.458 #36 NEW cov: 12235 ft: 15354 corp: 30/491b lim: 40 exec/s: 36 rss: 72Mb L: 18/30 MS: 1 EraseBytes- 00:07:49.716 [2024-07-24 22:46:47.673250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3626 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.716 [2024-07-24 22:46:47.673272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.716 #37 NEW cov: 12235 ft: 15420 corp: 31/506b lim: 40 exec/s: 37 rss: 72Mb L: 15/30 MS: 1 ChangeByte- 00:07:49.716 [2024-07-24 22:46:47.713539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:0aec3636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.716 [2024-07-24 22:46:47.713561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.716 [2024-07-24 22:46:47.713620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:33333333 cdw11:33333333 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.716 [2024-07-24 22:46:47.713631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.716 #43 NEW cov: 12235 ft: 15426 corp: 32/528b lim: 40 exec/s: 43 rss: 72Mb L: 22/30 MS: 1 CopyPart- 00:07:49.716 [2024-07-24 22:46:47.753434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0792f1a3 cdw11:99631700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.716 [2024-07-24 22:46:47.753456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.716 #47 NEW cov: 12235 ft: 15431 corp: 33/540b lim: 40 exec/s: 47 rss: 72Mb L: 12/30 MS: 4 InsertByte-InsertByte-InsertByte-PersAutoDict- DE: "\007\222\361\243\231c\027\000"- 00:07:49.716 [2024-07-24 22:46:47.793561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ec0a3636 cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.716 [2024-07-24 22:46:47.793583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.716 #51 NEW cov: 12235 ft: 15446 corp: 34/548b lim: 40 exec/s: 51 rss: 73Mb L: 8/30 MS: 4 EraseBytes-CrossOver-CrossOver-InsertByte- 00:07:49.716 [2024-07-24 22:46:47.843847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecff16 cdw11:6399bae8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.716 [2024-07-24 22:46:47.843869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.716 [2024-07-24 22:46:47.843927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b9f0ecec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.716 [2024-07-24 22:46:47.843938] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.716 #52 NEW cov: 12235 ft: 15457 corp: 35/564b lim: 40 exec/s: 52 rss: 73Mb L: 16/30 MS: 1 CrossOver- 00:07:49.716 [2024-07-24 22:46:47.883804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.716 [2024-07-24 22:46:47.883826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.716 #53 NEW cov: 12235 ft: 15462 corp: 36/579b lim: 40 exec/s: 53 rss: 73Mb L: 15/30 MS: 1 ChangeBit- 00:07:49.976 [2024-07-24 22:46:47.923925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:263636b6 cdw11:363636ec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:47.923948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.976 #54 NEW cov: 12235 ft: 15468 corp: 37/591b lim: 40 exec/s: 54 rss: 73Mb L: 12/30 MS: 1 EraseBytes- 00:07:49.976 [2024-07-24 22:46:47.974068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecec36 cdw11:ee36eeec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:47.974099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.976 #55 NEW cov: 12235 ft: 15479 corp: 38/602b lim: 40 exec/s: 55 rss: 73Mb L: 11/30 MS: 1 CopyPart- 00:07:49.976 [2024-07-24 22:46:48.024178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec3636 cdw11:3636ecec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:48.024203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.976 #56 NEW cov: 12235 ft: 15484 corp: 39/610b lim: 40 exec/s: 56 rss: 73Mb L: 8/30 MS: 1 EraseBytes- 00:07:49.976 [2024-07-24 22:46:48.074529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aecff0a cdw11:166399ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:48.074552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.976 [2024-07-24 22:46:48.074613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:e8ec36ec cdw11:ecececec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:48.074624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.976 #57 NEW cov: 12235 ft: 15494 corp: 40/628b lim: 40 exec/s: 57 rss: 73Mb L: 18/30 MS: 1 CrossOver- 00:07:49.976 [2024-07-24 22:46:48.124496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a6399ba cdw11:e8b9f0ec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:48.124518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.976 #58 NEW cov: 12235 ft: 15500 corp: 41/641b lim: 40 exec/s: 58 rss: 73Mb L: 13/30 MS: 1 EraseBytes- 00:07:49.976 [2024-07-24 22:46:48.175000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff04ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:48.175023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.976 [2024-07-24 22:46:48.175083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:48.175095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.976 [2024-07-24 22:46:48.175152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.976 [2024-07-24 22:46:48.175163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.235 #59 NEW cov: 12235 ft: 15549 corp: 42/669b lim: 40 exec/s: 59 rss: 73Mb L: 28/30 MS: 1 ChangeByte- 00:07:50.235 [2024-07-24 22:46:48.225130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aec36ff cdw11:166399ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.235 [2024-07-24 22:46:48.225152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.235 [2024-07-24 22:46:48.225205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:e8b9f026 cdw11:36363636 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.235 [2024-07-24 22:46:48.225216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.235 [2024-07-24 22:46:48.225271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:3636ec05 cdw11:000000ec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.235 [2024-07-24 22:46:48.225282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.235 #60 NEW cov: 12235 ft: 15574 corp: 43/696b lim: 40 exec/s: 30 rss: 73Mb L: 27/30 MS: 1 PersAutoDict- DE: "\377\026c\231\272\350\271\360"- 00:07:50.235 #60 DONE cov: 12235 ft: 15574 corp: 43/696b lim: 40 exec/s: 30 rss: 73Mb 00:07:50.235 ###### Recommended dictionary. ###### 00:07:50.235 "\004\000\000\000" # Uses: 0 00:07:50.235 "\377\026c\231\272\350\271\360" # Uses: 1 00:07:50.235 "\005\000\000\000" # Uses: 1 00:07:50.235 "\007\222\361\243\231c\027\000" # Uses: 1 00:07:50.235 ###### End of recommended dictionary. 
###### 00:07:50.235 Done 60 runs in 2 second(s) 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:50.235 22:46:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:50.235 [2024-07-24 22:46:48.417591] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:50.235 [2024-07-24 22:46:48.417658] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483932 ] 00:07:50.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.494 [2024-07-24 22:46:48.680870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.753 [2024-07-24 22:46:48.755193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.753 [2024-07-24 22:46:48.813787] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.753 [2024-07-24 22:46:48.830023] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:50.753 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.753 INFO: Seed: 675666025 00:07:50.753 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:50.753 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:50.753 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:50.753 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.753 #2 INITED exec/s: 0 rss: 65Mb 00:07:50.753 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.753 This may also happen if the target rejected all inputs we tried so far 00:07:50.753 [2024-07-24 22:46:48.874668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0f0f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.753 [2024-07-24 22:46:48.874704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.012 NEW_FUNC[1/699]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:51.012 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:51.012 #22 NEW cov: 11993 ft: 11995 corp: 2/9b lim: 40 exec/s: 0 rss: 71Mb L: 8/8 MS: 5 InsertRepeatedBytes-CrossOver-ChangeBit-ChangeByte-InsertByte- 00:07:51.013 [2024-07-24 22:46:49.055208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.013 [2024-07-24 22:46:49.055248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.013 [2024-07-24 22:46:49.055280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.013 [2024-07-24 22:46:49.055293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.013 [2024-07-24 22:46:49.055320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.013 [2024-07-24 22:46:49.055332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.013 NEW_FUNC[1/1]: 
0x1d9ede0 in spdk_thread_is_exited /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:736 00:07:51.013 #23 NEW cov: 12109 ft: 12960 corp: 3/34b lim: 40 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:51.013 [2024-07-24 22:46:49.145329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.013 [2024-07-24 22:46:49.145360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.013 [2024-07-24 22:46:49.145406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.013 [2024-07-24 22:46:49.145419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.013 [2024-07-24 22:46:49.145445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.013 [2024-07-24 22:46:49.145458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.013 #24 NEW cov: 12115 ft: 13224 corp: 4/60b lim: 40 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 InsertByte- 00:07:51.270 [2024-07-24 22:46:49.225564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.270 [2024-07-24 22:46:49.225593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.270 [2024-07-24 22:46:49.225624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacaca2f cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.270 [2024-07-24 22:46:49.225636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.270 [2024-07-24 22:46:49.225663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.270 [2024-07-24 22:46:49.225675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.270 #30 NEW cov: 12200 ft: 13531 corp: 5/85b lim: 40 exec/s: 0 rss: 71Mb L: 25/26 MS: 1 ChangeBinInt- 00:07:51.270 [2024-07-24 22:46:49.285621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0cacaf1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.270 [2024-07-24 22:46:49.285648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.270 #31 NEW cov: 12200 ft: 13651 corp: 6/93b lim: 40 exec/s: 0 rss: 71Mb L: 8/26 MS: 1 CrossOver- 00:07:51.270 [2024-07-24 22:46:49.346429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0ca7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.270 [2024-07-24 22:46:49.346452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:51.270 #32 NEW cov: 12200 ft: 13816 corp: 7/105b lim: 40 exec/s: 0 rss: 71Mb L: 12/26 MS: 1 CrossOver- 00:07:51.270 [2024-07-24 22:46:49.396598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.270 [2024-07-24 22:46:49.396621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.270 #33 NEW cov: 12200 ft: 13958 corp: 8/116b lim: 40 exec/s: 0 rss: 71Mb L: 11/26 MS: 1 InsertRepeatedBytes- 00:07:51.270 [2024-07-24 22:46:49.436841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.270 [2024-07-24 22:46:49.436864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.270 [2024-07-24 22:46:49.436937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.270 [2024-07-24 22:46:49.436948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.270 #34 NEW cov: 12200 ft: 14148 corp: 9/139b lim: 40 exec/s: 0 rss: 71Mb L: 23/26 MS: 1 InsertRepeatedBytes- 00:07:51.529 [2024-07-24 22:46:49.477101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.477126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.477182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.477193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.477249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cac2caca cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.477260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.529 #35 NEW cov: 12200 ft: 14199 corp: 10/165b lim: 40 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 ChangeBit- 00:07:51.529 [2024-07-24 22:46:49.516943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef0f0f0 cdw11:ca7ef00a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.516965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.529 #36 NEW cov: 12200 ft: 14243 corp: 11/175b lim: 40 exec/s: 0 rss: 71Mb L: 10/26 MS: 1 EraseBytes- 00:07:51.529 [2024-07-24 22:46:49.567360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.567385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.567443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.567454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.567508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6a6a6a6a cdw11:6a6a2d6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.567518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.529 #37 NEW cov: 12200 ft: 14276 corp: 12/199b lim: 40 exec/s: 0 rss: 71Mb L: 24/26 MS: 1 InsertByte- 00:07:51.529 [2024-07-24 22:46:49.617497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.617520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.617576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cbcacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.617587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.617641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.617653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.529 #38 NEW cov: 12200 ft: 14287 corp: 13/224b lim: 40 exec/s: 0 rss: 71Mb L: 25/26 MS: 1 ChangeBit- 00:07:51.529 [2024-07-24 22:46:49.657628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.657651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.657723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cbcacaca cdw11:cacacacd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.657734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.657789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.657800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.529 #39 NEW cov: 12200 ft: 14310 corp: 14/249b lim: 40 exec/s: 0 rss: 71Mb L: 25/26 MS: 1 ChangeBinInt- 00:07:51.529 [2024-07-24 22:46:49.707760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.707784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.707839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacaf0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.707851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.529 [2024-07-24 22:46:49.707908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.529 [2024-07-24 22:46:49.707922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.529 #40 NEW cov: 12200 ft: 14318 corp: 15/275b lim: 40 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 CopyPart- 00:07:51.788 [2024-07-24 22:46:49.747833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.747855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.747910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacaf0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.747921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.747976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacaceca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.747987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.788 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:51.788 #41 NEW cov: 12217 ft: 14448 corp: 16/301b lim: 40 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ChangeBit- 00:07:51.788 [2024-07-24 22:46:49.797858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00af0 cdw11:0a48f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.797882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.797939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ca48f0f0 cdw11:ca7ef00a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.797950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.788 #42 NEW cov: 12217 ft: 14458 corp: 17/319b lim: 40 exec/s: 0 rss: 72Mb L: 18/26 MS: 1 CopyPart- 00:07:51.788 [2024-07-24 22:46:49.838182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:21f0f0ca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 
[2024-07-24 22:46:49.838206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.838278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.838289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.838343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.838353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.788 #43 NEW cov: 12217 ft: 14473 corp: 18/345b lim: 40 exec/s: 43 rss: 72Mb L: 26/26 MS: 1 InsertByte- 00:07:51.788 [2024-07-24 22:46:49.877953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.877976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.788 #44 NEW cov: 12217 ft: 14549 corp: 19/360b lim: 40 exec/s: 44 rss: 72Mb L: 15/26 MS: 1 EraseBytes- 00:07:51.788 [2024-07-24 22:46:49.928363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:21f0f0ca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.928389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.928445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.928456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.928511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.928521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.788 #45 NEW cov: 12217 ft: 14612 corp: 20/386b lim: 40 exec/s: 45 rss: 72Mb L: 26/26 MS: 1 ShuffleBytes- 00:07:51.788 [2024-07-24 22:46:49.978745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00af0 cdw11:0a48f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.978768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.978823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ca48f0f0 cdw11:caffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.978834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.978889] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.978899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.978954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.978964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.788 [2024-07-24 22:46:49.979020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffff7e cdw11:f00aca48 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.788 [2024-07-24 22:46:49.979032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.047 #46 NEW cov: 12217 ft: 15103 corp: 21/426b lim: 40 exec/s: 46 rss: 72Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:52.047 [2024-07-24 22:46:50.038811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.038840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.047 [2024-07-24 22:46:50.038896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.038908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.047 [2024-07-24 22:46:50.038963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cac2caca cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.038974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.047 [2024-07-24 22:46:50.039031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.039045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.047 #47 NEW cov: 12217 ft: 15124 corp: 22/460b lim: 40 exec/s: 47 rss: 72Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:07:52.047 [2024-07-24 22:46:50.088820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.088853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.047 [2024-07-24 22:46:50.088910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a6a006a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.088921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.047 [2024-07-24 22:46:50.088975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6a6a6a6a cdw11:6a6a2d6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.088986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.047 #48 NEW cov: 12217 ft: 15139 corp: 23/484b lim: 40 exec/s: 48 rss: 72Mb L: 24/40 MS: 1 ChangeByte- 00:07:52.047 [2024-07-24 22:46:50.138707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f0f0ca7e cdw11:f00aca48 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.138731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.047 #49 NEW cov: 12217 ft: 15156 corp: 24/492b lim: 40 exec/s: 49 rss: 72Mb L: 8/40 MS: 1 EraseBytes- 00:07:52.047 [2024-07-24 22:46:50.189091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:21f0f0ca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.189114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.047 [2024-07-24 22:46:50.189188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.189199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.047 [2024-07-24 22:46:50.189252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:27cacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.047 [2024-07-24 22:46:50.189263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.047 #50 NEW cov: 12217 ft: 15170 corp: 25/518b lim: 40 exec/s: 50 rss: 72Mb L: 26/40 MS: 1 ChangeByte- 00:07:52.047 [2024-07-24 22:46:50.239267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.048 [2024-07-24 22:46:50.239289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.048 [2024-07-24 22:46:50.239363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacaf0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.048 [2024-07-24 22:46:50.239375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.048 [2024-07-24 22:46:50.239434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.048 [2024-07-24 22:46:50.239444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.306 #51 NEW cov: 12217 ft: 15184 corp: 26/547b lim: 40 exec/s: 51 rss: 72Mb L: 29/40 MS: 1 CopyPart- 00:07:52.307 [2024-07-24 22:46:50.279360] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6a6a6a cdw11:6a8f6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.279382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.279454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a6a006a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.279465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.279521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6a6a6a6a cdw11:6a6a2d6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.279532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.307 #52 NEW cov: 12217 ft: 15194 corp: 27/571b lim: 40 exec/s: 52 rss: 72Mb L: 24/40 MS: 1 ChangeBinInt- 00:07:52.307 [2024-07-24 22:46:50.329527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ecacaca cdw11:cacacacd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.329550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.329607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacacd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.329618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.329674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.329685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.307 #53 NEW cov: 12217 ft: 15202 corp: 28/596b lim: 40 exec/s: 53 rss: 72Mb L: 25/40 MS: 1 CopyPart- 00:07:52.307 [2024-07-24 22:46:50.379761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.379783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.379856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.379867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.379923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6a6a6a6a cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.379933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.379989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:13131313 cdw11:13136a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.380000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.307 #54 NEW cov: 12217 ft: 15209 corp: 29/629b lim: 40 exec/s: 54 rss: 72Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:07:52.307 [2024-07-24 22:46:50.419732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:21f0f0ca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.419757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.419816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.419827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.419883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:23cacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.419894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.307 #55 NEW cov: 12217 ft: 15219 corp: 30/655b lim: 40 exec/s: 55 rss: 72Mb L: 26/40 MS: 1 ChangeByte- 00:07:52.307 [2024-07-24 22:46:50.459592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ff00a48 cdw11:f0f0ca7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.459614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.307 #56 NEW cov: 12217 ft: 15242 corp: 31/667b lim: 40 exec/s: 56 rss: 72Mb L: 12/40 MS: 1 ChangeByte- 00:07:52.307 [2024-07-24 22:46:50.499841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.499863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.307 [2024-07-24 22:46:50.499934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:006a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.307 [2024-07-24 22:46:50.499946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.568 #57 NEW cov: 12217 ft: 15255 corp: 32/685b lim: 40 exec/s: 57 rss: 72Mb L: 18/40 MS: 1 EraseBytes- 00:07:52.568 [2024-07-24 22:46:50.540115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ecacaca cdw11:cacacacd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.540138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.568 [2024-07-24 22:46:50.540193] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacacd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.540204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.568 [2024-07-24 22:46:50.540260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacacadc cdw11:cacacaf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.540271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.568 #58 NEW cov: 12217 ft: 15263 corp: 33/710b lim: 40 exec/s: 58 rss: 72Mb L: 25/40 MS: 1 ChangeByte- 00:07:52.568 [2024-07-24 22:46:50.590237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:21f0f0ca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.590259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.568 [2024-07-24 22:46:50.590334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.590345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.568 [2024-07-24 22:46:50.590406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacaca36 cdw11:35cacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.590416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.568 #59 NEW cov: 12217 ft: 15274 corp: 34/736b lim: 40 exec/s: 59 rss: 72Mb L: 26/40 MS: 1 ChangeBinInt- 00:07:52.568 [2024-07-24 22:46:50.630354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:f0f0caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.630376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.568 [2024-07-24 22:46:50.630449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.630460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.568 [2024-07-24 22:46:50.630515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacaffff cdw11:ffffffca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.630526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.568 #60 NEW cov: 12217 ft: 15294 corp: 35/767b lim: 40 exec/s: 60 rss: 72Mb L: 31/40 MS: 1 InsertRepeatedBytes- 00:07:52.568 [2024-07-24 22:46:50.670496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:21f0f0ca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 
22:46:50.670518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.568 [2024-07-24 22:46:50.670592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.670603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.568 [2024-07-24 22:46:50.670661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacaca36 cdw11:36cacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.670671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.568 #61 NEW cov: 12217 ft: 15309 corp: 36/793b lim: 40 exec/s: 61 rss: 72Mb L: 26/40 MS: 1 ChangeASCIIInt- 00:07:52.568 [2024-07-24 22:46:50.720658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7ef00a48 cdw11:21f0f0ca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.568 [2024-07-24 22:46:50.720682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.569 [2024-07-24 22:46:50.720738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.569 [2024-07-24 22:46:50.720749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.569 [2024-07-24 22:46:50.720807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cacaca36 cdw11:35cacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.569 [2024-07-24 22:46:50.720817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.569 #62 NEW cov: 12217 ft: 15318 corp: 37/819b lim: 40 exec/s: 62 rss: 72Mb L: 26/40 MS: 1 ShuffleBytes- 00:07:52.569 [2024-07-24 22:46:50.760743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.569 [2024-07-24 22:46:50.760769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.569 [2024-07-24 22:46:50.760842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a6a186a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.569 [2024-07-24 22:46:50.760853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.569 [2024-07-24 22:46:50.760909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6a6a6a6a cdw11:6a6a2d6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.569 [2024-07-24 22:46:50.760919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.828 #63 NEW cov: 12224 ft: 15358 corp: 38/843b lim: 40 exec/s: 63 rss: 72Mb L: 24/40 MS: 1 ChangeBinInt- 00:07:52.828 [2024-07-24 22:46:50.800878] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.829 [2024-07-24 22:46:50.800901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.829 [2024-07-24 22:46:50.800973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.829 [2024-07-24 22:46:50.800984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.829 [2024-07-24 22:46:50.801038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.829 [2024-07-24 22:46:50.801048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.829 #68 NEW cov: 12224 ft: 15366 corp: 39/872b lim: 40 exec/s: 68 rss: 72Mb L: 29/40 MS: 5 ShuffleBytes-ChangeByte-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:07:52.829 [2024-07-24 22:46:50.840733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.829 [2024-07-24 22:46:50.840755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.829 #69 NEW cov: 12224 ft: 15367 corp: 40/883b lim: 40 exec/s: 34 rss: 72Mb L: 11/40 MS: 1 ChangeBinInt- 00:07:52.829 #69 DONE cov: 12224 ft: 15367 corp: 40/883b lim: 40 exec/s: 34 rss: 72Mb 00:07:52.829 Done 69 runs in 2 second(s) 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:52.829 22:46:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:52.829 22:46:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 
trsvcid:4414' 00:07:52.829 22:46:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:52.829 22:46:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:52.829 22:46:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:52.829 22:46:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:52.829 [2024-07-24 22:46:51.033712] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:52.829 [2024-07-24 22:46:51.033791] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484369 ] 00:07:53.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.086 [2024-07-24 22:46:51.219042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.086 [2024-07-24 22:46:51.287620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.343 [2024-07-24 22:46:51.346398] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.343 [2024-07-24 22:46:51.362632] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:53.343 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.343 INFO: Seed: 3209658314 00:07:53.343 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:53.343 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:53.343 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:53.343 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.343 #2 INITED exec/s: 0 rss: 64Mb 00:07:53.343 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:53.343 This may also happen if the target rejected all inputs we tried so far 00:07:53.343 [2024-07-24 22:46:51.430057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.343 [2024-07-24 22:46:51.430096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.343 [2024-07-24 22:46:51.430199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.343 [2024-07-24 22:46:51.430215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.601 NEW_FUNC[1/699]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:53.601 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:53.601 #22 NEW cov: 11970 ft: 11984 corp: 2/19b lim: 35 exec/s: 0 rss: 71Mb L: 18/18 MS: 5 InsertByte-EraseBytes-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:53.601 [2024-07-24 22:46:51.600193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.602 [2024-07-24 22:46:51.600240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.602 NEW_FUNC[1/3]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:53.602 NEW_FUNC[2/3]: 0x13472b0 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:734 00:07:53.602 #30 NEW cov: 12113 ft: 13372 corp: 3/32b lim: 35 exec/s: 0 rss: 71Mb L: 13/18 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:53.602 [2024-07-24 22:46:51.660303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.602 [2024-07-24 22:46:51.660331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.602 #32 NEW cov: 12119 ft: 13635 corp: 4/43b lim: 35 exec/s: 0 rss: 71Mb L: 11/18 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:53.602 [2024-07-24 22:46:51.711239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.602 [2024-07-24 22:46:51.711265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.602 NEW_FUNC[1/1]: 0x11f6350 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1765 00:07:53.602 #33 NEW cov: 12227 ft: 13904 corp: 5/57b lim: 35 exec/s: 0 rss: 72Mb L: 14/18 MS: 1 CrossOver- 00:07:53.602 [2024-07-24 22:46:51.771035] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.602 [2024-07-24 22:46:51.771061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:53.602 #34 NEW cov: 12227 ft: 13970 corp: 6/69b lim: 35 exec/s: 0 rss: 72Mb L: 12/18 MS: 1 InsertByte- 00:07:53.860 [2024-07-24 22:46:51.831716] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.860 [2024-07-24 22:46:51.831742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.860 [2024-07-24 22:46:51.831835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.860 [2024-07-24 22:46:51.831851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.860 #35 NEW cov: 12227 ft: 14028 corp: 7/87b lim: 35 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 ChangeBit- 00:07:53.860 [2024-07-24 22:46:51.891530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.860 [2024-07-24 22:46:51.891556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.860 #36 NEW cov: 12227 ft: 14135 corp: 8/99b lim: 35 exec/s: 0 rss: 72Mb L: 12/18 MS: 1 ChangeBit- 00:07:53.860 [2024-07-24 22:46:51.962480] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.860 [2024-07-24 22:46:51.962505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.860 [2024-07-24 22:46:51.962601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.860 [2024-07-24 22:46:51.962616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.860 [2024-07-24 22:46:51.962707] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.860 [2024-07-24 22:46:51.962722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.860 #37 NEW cov: 12227 ft: 14368 corp: 9/122b lim: 35 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:07:53.860 [2024-07-24 22:46:52.022283] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.860 [2024-07-24 22:46:52.022314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.860 [2024-07-24 22:46:52.022406] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.860 [2024-07-24 22:46:52.022422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.860 #38 NEW cov: 12227 ft: 14408 corp: 10/140b lim: 35 exec/s: 0 rss: 72Mb L: 18/23 MS: 1 ChangeBinInt- 00:07:54.118 [2024-07-24 22:46:52.073200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.073224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.118 [2024-07-24 22:46:52.073330] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.073343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.118 [2024-07-24 22:46:52.073434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.073446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.118 #39 NEW cov: 12234 ft: 14485 corp: 11/161b lim: 35 exec/s: 0 rss: 72Mb L: 21/23 MS: 1 InsertRepeatedBytes- 00:07:54.118 [2024-07-24 22:46:52.122512] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.122540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.118 #40 NEW cov: 12234 ft: 14520 corp: 12/173b lim: 35 exec/s: 0 rss: 72Mb L: 12/23 MS: 1 CopyPart- 00:07:54.118 [2024-07-24 22:46:52.173919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.173943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.118 [2024-07-24 22:46:52.174046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.174058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.118 [2024-07-24 22:46:52.174158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.174170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.118 #41 NEW cov: 12234 ft: 14663 corp: 13/206b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:54.118 [2024-07-24 22:46:52.223749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.223775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.118 [2024-07-24 22:46:52.223875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.223891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.118 [2024-07-24 22:46:52.223993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 
22:46:52.224010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.118 #42 NEW cov: 12234 ft: 14742 corp: 14/232b lim: 35 exec/s: 0 rss: 72Mb L: 26/33 MS: 1 InsertRepeatedBytes- 00:07:54.118 [2024-07-24 22:46:52.294014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.294038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.118 [2024-07-24 22:46:52.294085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.118 [2024-07-24 22:46:52.294098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.118 [2024-07-24 22:46:52.294192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.119 [2024-07-24 22:46:52.294206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.119 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:54.119 #48 NEW cov: 12257 ft: 14784 corp: 15/255b lim: 35 exec/s: 0 rss: 72Mb L: 23/33 MS: 1 ChangeBinInt- 00:07:54.377 [2024-07-24 22:46:52.344676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.344701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.377 [2024-07-24 22:46:52.344796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.344811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.377 [2024-07-24 22:46:52.344903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.344915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.377 #49 NEW cov: 12257 ft: 14879 corp: 16/288b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:07:54.377 [2024-07-24 22:46:52.414427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.414455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.377 [2024-07-24 22:46:52.414560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.414575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.377 [2024-07-24 22:46:52.414669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.414683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.377 #50 NEW cov: 12257 ft: 14892 corp: 17/312b lim: 35 exec/s: 50 rss: 72Mb L: 24/33 MS: 1 InsertByte- 00:07:54.377 [2024-07-24 22:46:52.464265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.464291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.377 [2024-07-24 22:46:52.464384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.464403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.377 #51 NEW cov: 12257 ft: 14898 corp: 18/331b lim: 35 exec/s: 51 rss: 72Mb L: 19/33 MS: 1 InsertByte- 00:07:54.377 [2024-07-24 22:46:52.514136] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.514164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.377 #52 NEW cov: 12257 ft: 14915 corp: 19/343b lim: 35 exec/s: 52 rss: 72Mb L: 12/33 MS: 1 ShuffleBytes- 00:07:54.377 [2024-07-24 22:46:52.565546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.565575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.377 [2024-07-24 22:46:52.565665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.565682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.377 [2024-07-24 22:46:52.565772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.565790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.377 [2024-07-24 22:46:52.565880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.377 [2024-07-24 22:46:52.565899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.635 #53 NEW cov: 12257 ft: 15059 corp: 20/372b lim: 35 exec/s: 53 rss: 72Mb L: 29/33 MS: 1 InsertRepeatedBytes- 00:07:54.635 [2024-07-24 22:46:52.635199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.635 [2024-07-24 22:46:52.635227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.635 [2024-07-24 
22:46:52.635323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.635 [2024-07-24 22:46:52.635338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.635 #54 NEW cov: 12257 ft: 15196 corp: 21/391b lim: 35 exec/s: 54 rss: 72Mb L: 19/33 MS: 1 ChangeBit- 00:07:54.635 [2024-07-24 22:46:52.705201] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.635 [2024-07-24 22:46:52.705231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.635 #55 NEW cov: 12257 ft: 15224 corp: 22/404b lim: 35 exec/s: 55 rss: 72Mb L: 13/33 MS: 1 InsertByte- 00:07:54.635 [2024-07-24 22:46:52.775844] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.635 [2024-07-24 22:46:52.775870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.635 [2024-07-24 22:46:52.775965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.635 [2024-07-24 22:46:52.775984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.635 #56 NEW cov: 12257 ft: 15232 corp: 23/422b lim: 35 exec/s: 56 rss: 72Mb L: 18/33 MS: 1 ShuffleBytes- 00:07:54.635 [2024-07-24 22:46:52.825750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.635 [2024-07-24 22:46:52.825779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.894 #57 NEW cov: 12257 ft: 15234 corp: 24/434b lim: 35 exec/s: 57 rss: 72Mb L: 12/33 MS: 1 ChangeBit- 00:07:54.894 [2024-07-24 22:46:52.877041] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.877065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:52.877165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.877179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:52.877271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.877284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.894 #58 NEW cov: 12257 ft: 15284 corp: 25/467b lim: 35 exec/s: 58 rss: 72Mb L: 33/33 MS: 1 ChangeBit- 00:07:54.894 [2024-07-24 22:46:52.927196] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 
22:46:52.927221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:52.927321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.927334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:52.927422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.927435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:52.927530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.927544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.894 #59 NEW cov: 12257 ft: 15324 corp: 26/500b lim: 35 exec/s: 59 rss: 72Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:54.894 [2024-07-24 22:46:52.997539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.997563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:52.997658] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.997673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:52.997772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.997787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:52.997882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:52.997899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.894 #60 NEW cov: 12257 ft: 15338 corp: 27/533b lim: 35 exec/s: 60 rss: 72Mb L: 33/33 MS: 1 ChangeByte- 00:07:54.894 [2024-07-24 22:46:53.067778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:53.067802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:53.067890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:53.067904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:53.068006] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:53.068020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.894 [2024-07-24 22:46:53.068109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.894 [2024-07-24 22:46:53.068124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.152 #61 NEW cov: 12257 ft: 15376 corp: 28/566b lim: 35 exec/s: 61 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:07:55.152 [2024-07-24 22:46:53.136907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.152 [2024-07-24 22:46:53.136933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.152 #62 NEW cov: 12257 ft: 15442 corp: 29/578b lim: 35 exec/s: 62 rss: 73Mb L: 12/33 MS: 1 ChangeByte- 00:07:55.152 [2024-07-24 22:46:53.197836] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.152 [2024-07-24 22:46:53.197860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.152 [2024-07-24 22:46:53.197957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.152 [2024-07-24 22:46:53.197975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.152 [2024-07-24 22:46:53.198067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000001e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.152 [2024-07-24 22:46:53.198085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.152 #63 NEW cov: 12257 ft: 15451 corp: 30/605b lim: 35 exec/s: 63 rss: 73Mb L: 27/33 MS: 1 CMP- DE: "\377\377\377\036"- 00:07:55.152 [2024-07-24 22:46:53.247189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.152 [2024-07-24 22:46:53.247215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.152 #64 NEW cov: 12257 ft: 15468 corp: 31/618b lim: 35 exec/s: 64 rss: 73Mb L: 13/33 MS: 1 ChangeBit- 00:07:55.152 [2024-07-24 22:46:53.307934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.152 [2024-07-24 22:46:53.307960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.152 [2024-07-24 22:46:53.308056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000009a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.152 [2024-07-24 22:46:53.308079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE 
(01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.152 #65 NEW cov: 12257 ft: 15488 corp: 32/632b lim: 35 exec/s: 65 rss: 73Mb L: 14/33 MS: 1 CopyPart- 00:07:55.409 [2024-07-24 22:46:53.358525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.409 [2024-07-24 22:46:53.358549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.409 [2024-07-24 22:46:53.358643] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.410 [2024-07-24 22:46:53.358657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.410 [2024-07-24 22:46:53.358752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.410 [2024-07-24 22:46:53.358764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.410 #66 NEW cov: 12257 ft: 15528 corp: 33/657b lim: 35 exec/s: 66 rss: 73Mb L: 25/33 MS: 1 EraseBytes- 00:07:55.410 [2024-07-24 22:46:53.409188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.410 [2024-07-24 22:46:53.409210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.410 [2024-07-24 22:46:53.409302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.410 [2024-07-24 22:46:53.409315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.410 [2024-07-24 22:46:53.409402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.410 [2024-07-24 22:46:53.409416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.410 [2024-07-24 22:46:53.409516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.410 [2024-07-24 22:46:53.409530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.410 #67 NEW cov: 12257 ft: 15546 corp: 34/691b lim: 35 exec/s: 33 rss: 73Mb L: 34/34 MS: 1 InsertByte- 00:07:55.410 #67 DONE cov: 12257 ft: 15546 corp: 34/691b lim: 35 exec/s: 33 rss: 73Mb 00:07:55.410 ###### Recommended dictionary. ###### 00:07:55.410 "\377\377\377\036" # Uses: 0 00:07:55.410 ###### End of recommended dictionary. 
###### 00:07:55.410 Done 67 runs in 2 second(s) 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:55.410 22:46:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:55.410 [2024-07-24 22:46:53.590194] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:07:55.410 [2024-07-24 22:46:53.590257] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484797 ] 00:07:55.667 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.667 [2024-07-24 22:46:53.768551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.667 [2024-07-24 22:46:53.834231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.925 [2024-07-24 22:46:53.893259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.925 [2024-07-24 22:46:53.909494] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:55.925 INFO: Running with entropic power schedule (0xFF, 100). 00:07:55.925 INFO: Seed: 1461687522 00:07:55.925 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:55.925 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:55.925 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:55.925 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.925 #2 INITED exec/s: 0 rss: 64Mb 00:07:55.925 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:55.925 This may also happen if the target rejected all inputs we tried so far 00:07:55.925 [2024-07-24 22:46:53.965114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.925 [2024-07-24 22:46:53.965139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.925 [2024-07-24 22:46:53.965197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.925 [2024-07-24 22:46:53.965208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.925 [2024-07-24 22:46:53.965266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.925 [2024-07-24 22:46:53.965278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.925 NEW_FUNC[1/700]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:55.925 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:55.925 #8 NEW cov: 11978 ft: 11977 corp: 2/27b lim: 35 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:55.925 [2024-07-24 22:46:54.115493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.925 [2024-07-24 22:46:54.115535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.925 [2024-07-24 22:46:54.115608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:55.925 [2024-07-24 22:46:54.115624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.183 #9 NEW cov: 12091 ft: 12716 corp: 3/42b lim: 35 exec/s: 0 rss: 71Mb L: 15/26 MS: 1 EraseBytes- 00:07:56.183 [2024-07-24 22:46:54.175541] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.175567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.183 [2024-07-24 22:46:54.175623] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.175634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.183 [2024-07-24 22:46:54.175690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.175701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.183 #12 NEW cov: 12097 ft: 12941 corp: 4/64b lim: 35 exec/s: 0 rss: 71Mb L: 22/26 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:07:56.183 [2024-07-24 22:46:54.215619] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.215641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.183 [2024-07-24 22:46:54.215714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.215726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.183 [2024-07-24 22:46:54.215784] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.215794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.183 #16 NEW cov: 12182 ft: 13178 corp: 5/87b lim: 35 exec/s: 0 rss: 71Mb L: 23/26 MS: 4 ChangeByte-ShuffleBytes-ShuffleBytes-CrossOver- 00:07:56.183 [2024-07-24 22:46:54.255651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.255672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.183 [2024-07-24 22:46:54.255748] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.255759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.183 #17 NEW cov: 12182 ft: 13336 corp: 6/103b lim: 35 exec/s: 0 rss: 72Mb L: 16/26 MS: 1 InsertByte- 00:07:56.183 [2024-07-24 22:46:54.306082] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.306103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.183 [2024-07-24 22:46:54.306178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.306190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.183 [2024-07-24 22:46:54.306250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.183 [2024-07-24 22:46:54.306261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.183 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:56.183 #18 NEW cov: 12196 ft: 13753 corp: 7/133b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:56.183 #20 NEW cov: 12196 ft: 14011 corp: 8/145b lim: 35 exec/s: 0 rss: 72Mb L: 12/30 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:56.442 #21 NEW cov: 12196 ft: 14042 corp: 9/157b lim: 35 exec/s: 0 rss: 72Mb L: 12/30 MS: 1 ChangeBinInt- 00:07:56.442 [2024-07-24 22:46:54.436231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.436253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.442 [2024-07-24 22:46:54.436327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.436339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.442 [2024-07-24 22:46:54.436395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.436406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.442 #22 NEW cov: 12196 ft: 14116 corp: 10/181b lim: 35 exec/s: 0 rss: 72Mb L: 24/30 MS: 1 InsertByte- 00:07:56.442 [2024-07-24 22:46:54.486287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.486310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.442 [2024-07-24 22:46:54.486367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.486378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.442 #23 NEW cov: 12196 ft: 14147 corp: 11/196b lim: 35 exec/s: 0 rss: 72Mb L: 15/30 MS: 1 ShuffleBytes- 00:07:56.442 [2024-07-24 22:46:54.526724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.526745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.442 [2024-07-24 22:46:54.526821] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.526832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.442 [2024-07-24 22:46:54.526893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.526903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.442 #24 NEW cov: 12196 ft: 14172 corp: 12/226b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:56.442 [2024-07-24 22:46:54.576381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.576404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.442 #25 NEW cov: 12196 ft: 14313 corp: 13/239b lim: 35 exec/s: 0 rss: 72Mb L: 13/30 MS: 1 EraseBytes- 00:07:56.442 [2024-07-24 22:46:54.616780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.616803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.442 [2024-07-24 22:46:54.616857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.616868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.442 [2024-07-24 22:46:54.616924] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.442 [2024-07-24 22:46:54.616935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.442 #26 NEW cov: 12196 ft: 14343 corp: 14/262b lim: 35 exec/s: 0 rss: 72Mb L: 23/30 MS: 1 CopyPart- 00:07:56.700 [2024-07-24 22:46:54.657060] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.657087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.657145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.657155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.657213] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.657224] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.700 #27 NEW cov: 12196 ft: 14358 corp: 15/292b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBit- 00:07:56.700 [2024-07-24 22:46:54.696983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.697005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.697080] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.697092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.697150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.697161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.700 #28 NEW cov: 12196 ft: 14418 corp: 16/316b lim: 35 exec/s: 0 rss: 72Mb L: 24/30 MS: 1 ChangeBinInt- 00:07:56.700 [2024-07-24 22:46:54.747332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.747354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.747428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.747439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.747498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.747509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.700 #29 NEW cov: 12196 ft: 14472 corp: 17/346b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeByte- 00:07:56.700 [2024-07-24 22:46:54.797112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.797135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.797192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.797203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.700 #30 NEW cov: 12196 ft: 14493 corp: 18/361b lim: 35 exec/s: 0 rss: 72Mb L: 15/30 MS: 1 ChangeByte- 00:07:56.700 [2024-07-24 22:46:54.847613] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.847635] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.847709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.847721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.847780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.847790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.700 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:56.700 #31 NEW cov: 12219 ft: 14527 corp: 19/391b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBit- 00:07:56.700 [2024-07-24 22:46:54.887528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.887550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.887606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.887617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.700 [2024-07-24 22:46:54.887674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.700 [2024-07-24 22:46:54.887684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.958 #32 NEW cov: 12219 ft: 14534 corp: 20/415b lim: 35 exec/s: 0 rss: 72Mb L: 24/30 MS: 1 InsertByte- 00:07:56.958 [2024-07-24 22:46:54.927651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.958 [2024-07-24 22:46:54.927672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.958 [2024-07-24 22:46:54.927746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.958 [2024-07-24 22:46:54.927758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.958 [2024-07-24 22:46:54.927810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.958 [2024-07-24 22:46:54.927821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.958 #33 NEW cov: 12219 ft: 14550 corp: 21/441b lim: 35 exec/s: 33 rss: 72Mb L: 26/30 MS: 1 CrossOver- 00:07:56.958 [2024-07-24 22:46:54.967935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.958 
[2024-07-24 22:46:54.967956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.958 [2024-07-24 22:46:54.968027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.958 [2024-07-24 22:46:54.968039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.958 [2024-07-24 22:46:54.968099] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.958 [2024-07-24 22:46:54.968110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.958 #34 NEW cov: 12219 ft: 14561 corp: 22/471b lim: 35 exec/s: 34 rss: 72Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:56.959 [2024-07-24 22:46:55.017874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.017896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.959 [2024-07-24 22:46:55.017956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.017967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.959 [2024-07-24 22:46:55.018024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.018034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.959 #35 NEW cov: 12219 ft: 14562 corp: 23/497b lim: 35 exec/s: 35 rss: 72Mb L: 26/30 MS: 1 InsertRepeatedBytes- 00:07:56.959 [2024-07-24 22:46:55.058015] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.058037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.959 [2024-07-24 22:46:55.058115] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.058128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.959 [2024-07-24 22:46:55.058188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.058198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.959 #36 NEW cov: 12219 ft: 14576 corp: 24/524b lim: 35 exec/s: 36 rss: 72Mb L: 27/30 MS: 1 InsertByte- 00:07:56.959 [2024-07-24 22:46:55.108169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.108190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.959 [2024-07-24 22:46:55.108264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.108275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.959 [2024-07-24 22:46:55.108332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.108346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.959 #37 NEW cov: 12219 ft: 14592 corp: 25/550b lim: 35 exec/s: 37 rss: 72Mb L: 26/30 MS: 1 ChangeBit- 00:07:56.959 [2024-07-24 22:46:55.148273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.148295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.959 [2024-07-24 22:46:55.148367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.148378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.959 [2024-07-24 22:46:55.148436] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.959 [2024-07-24 22:46:55.148447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.217 #38 NEW cov: 12219 ft: 14610 corp: 26/575b lim: 35 exec/s: 38 rss: 72Mb L: 25/30 MS: 1 InsertByte- 00:07:57.217 [2024-07-24 22:46:55.198617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.198640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 22:46:55.198698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.198710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 22:46:55.198766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.198778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.217 #44 NEW cov: 12219 ft: 14636 corp: 27/605b lim: 35 exec/s: 44 rss: 72Mb L: 30/30 MS: 1 ChangeBit- 00:07:57.217 [2024-07-24 22:46:55.238750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000244 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.238773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 
22:46:55.238831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000244 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.238841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 22:46:55.238897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000244 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.238909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.217 #45 NEW cov: 12219 ft: 14660 corp: 28/634b lim: 35 exec/s: 45 rss: 72Mb L: 29/30 MS: 1 InsertRepeatedBytes- 00:07:57.217 [2024-07-24 22:46:55.288540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.288563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 22:46:55.288621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.288632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.217 #46 NEW cov: 12219 ft: 14741 corp: 29/650b lim: 35 exec/s: 46 rss: 73Mb L: 16/30 MS: 1 ChangeBit- 00:07:57.217 [2024-07-24 22:46:55.338979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.339001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 22:46:55.339079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.339091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 22:46:55.339148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.339160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 22:46:55.339216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.339227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.217 #47 NEW cov: 12219 ft: 14885 corp: 30/681b lim: 35 exec/s: 47 rss: 73Mb L: 31/31 MS: 1 CopyPart- 00:07:57.217 [2024-07-24 22:46:55.388854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.388878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.217 [2024-07-24 22:46:55.388935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.217 [2024-07-24 22:46:55.388946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.217 #48 NEW cov: 12219 ft: 14901 corp: 31/696b lim: 35 exec/s: 48 rss: 73Mb L: 15/31 MS: 1 ChangeByte- 00:07:57.476 [2024-07-24 22:46:55.428955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.428978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.429038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.429048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.476 #49 NEW cov: 12219 ft: 14906 corp: 32/712b lim: 35 exec/s: 49 rss: 73Mb L: 16/31 MS: 1 InsertByte- 00:07:57.476 [2024-07-24 22:46:55.469208] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.469230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.469304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.469315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.469386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.469397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.476 #50 NEW cov: 12219 ft: 14982 corp: 33/738b lim: 35 exec/s: 50 rss: 73Mb L: 26/31 MS: 1 CMP- DE: "\007\000"- 00:07:57.476 [2024-07-24 22:46:55.509520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.509543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.509618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.509629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.509687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.509698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.476 #51 NEW cov: 12219 ft: 14991 corp: 34/768b lim: 35 exec/s: 51 rss: 73Mb L: 30/31 MS: 1 ShuffleBytes- 00:07:57.476 [2024-07-24 22:46:55.549648] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000244 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.549670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.549744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000244 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.549755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.549811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000244 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.549822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.476 #52 NEW cov: 12219 ft: 15000 corp: 35/797b lim: 35 exec/s: 52 rss: 73Mb L: 29/31 MS: 1 ChangeBit- 00:07:57.476 [2024-07-24 22:46:55.599595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.599617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.599691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.599702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.599761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.599772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.476 #53 NEW cov: 12219 ft: 15010 corp: 36/823b lim: 35 exec/s: 53 rss: 73Mb L: 26/31 MS: 1 InsertByte- 00:07:57.476 #54 NEW cov: 12219 ft: 15055 corp: 37/835b lim: 35 exec/s: 54 rss: 73Mb L: 12/31 MS: 1 ChangeByte- 00:07:57.476 [2024-07-24 22:46:55.679892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000244 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.679919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.476 [2024-07-24 22:46:55.679977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000244 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.476 [2024-07-24 22:46:55.679988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.734 #55 NEW cov: 12219 ft: 15067 corp: 38/858b lim: 35 exec/s: 55 rss: 73Mb L: 23/31 MS: 1 EraseBytes- 00:07:57.734 [2024-07-24 22:46:55.729999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.730024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.734 [2024-07-24 
22:46:55.730084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.730095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.734 [2024-07-24 22:46:55.730151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.730161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.734 #56 NEW cov: 12219 ft: 15077 corp: 39/882b lim: 35 exec/s: 56 rss: 73Mb L: 24/31 MS: 1 ShuffleBytes- 00:07:57.734 [2024-07-24 22:46:55.769812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.769834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.734 #57 NEW cov: 12219 ft: 15100 corp: 40/892b lim: 35 exec/s: 57 rss: 73Mb L: 10/31 MS: 1 EraseBytes- 00:07:57.734 [2024-07-24 22:46:55.809895] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.809918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.734 #58 NEW cov: 12219 ft: 15103 corp: 41/905b lim: 35 exec/s: 58 rss: 73Mb L: 13/31 MS: 1 ChangeByte- 00:07:57.734 [2024-07-24 22:46:55.860036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.860057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.734 #59 NEW cov: 12219 ft: 15111 corp: 42/915b lim: 35 exec/s: 59 rss: 74Mb L: 10/31 MS: 1 ChangeBinInt- 00:07:57.734 [2024-07-24 22:46:55.910484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.910506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.734 [2024-07-24 22:46:55.910561] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.910572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.734 [2024-07-24 22:46:55.910625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.734 [2024-07-24 22:46:55.910636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.734 #60 NEW cov: 12219 ft: 15133 corp: 43/942b lim: 35 exec/s: 60 rss: 74Mb L: 27/31 MS: 1 PersAutoDict- DE: "\007\000"- 00:07:57.992 [2024-07-24 22:46:55.950826] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.992 [2024-07-24 
22:46:55.950848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.992 [2024-07-24 22:46:55.950901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.992 [2024-07-24 22:46:55.950912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.992 [2024-07-24 22:46:55.950966] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.992 [2024-07-24 22:46:55.950980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.992 #61 NEW cov: 12219 ft: 15139 corp: 44/976b lim: 35 exec/s: 30 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:57.992 #61 DONE cov: 12219 ft: 15139 corp: 44/976b lim: 35 exec/s: 30 rss: 74Mb 00:07:57.992 ###### Recommended dictionary. ###### 00:07:57.992 "\007\000" # Uses: 1 00:07:57.992 ###### End of recommended dictionary. ###### 00:07:57.992 Done 61 runs in 2 second(s) 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:57.992 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:57.993 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:57.993 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:57.993 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:57.993 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:57.993 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:57.993 22:46:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:57.993 [2024-07-24 22:46:56.142866] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:07:57.993 [2024-07-24 22:46:56.142947] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485232 ] 00:07:57.993 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.251 [2024-07-24 22:46:56.328442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.251 [2024-07-24 22:46:56.393261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.251 [2024-07-24 22:46:56.451651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.510 [2024-07-24 22:46:56.467894] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:58.510 INFO: Running with entropic power schedule (0xFF, 100). 00:07:58.510 INFO: Seed: 4019692703 00:07:58.510 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:07:58.510 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:07:58.510 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:58.510 INFO: A corpus is not provided, starting from an empty corpus 00:07:58.510 #2 INITED exec/s: 0 rss: 64Mb 00:07:58.510 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
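For reference, the nvmf/run.sh xtrace entries above (run.sh@23 through run.sh@45) amount to roughly the following standalone shell sequence for fuzzer instance 16. This is a minimal sketch reconstructed only from the trace: the output redirections for the sed and echo steps are not visible in the xtrace and are assumed from the $nvmf_cfg and $suppress_file values, and the flag meanings in the comments are inferred from the surrounding log (EAL parameters, reactor core, run duration), not taken from run.sh itself.

  # Sketch of the traced per-instance setup (assumptions noted inline).
  port=4416
  corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
  nvmf_cfg=/tmp/fuzz_json_16.conf
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  # Set in run.sh@32; how it propagates to the fuzz process is not shown in the trace.
  LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
  mkdir -p "$corpus_dir"
  # Retarget the NVMe-oF listener from the default 4420 to this instance's port.
  # Redirection to $nvmf_cfg is assumed; the trace shows only the sed command.
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' \
      /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf > "$nvmf_cfg"
  # LeakSanitizer suppressions; redirection to $suppress_file is assumed.
  echo leak:spdk_nvmf_qpair_disconnect  > "$suppress_file"
  echo leak:nvmf_ctrlr_create          >> "$suppress_file"
  # Launch the harness: -m 0x1 is the core mask and -s 512 the memory size in MB
  # (consistent with the "-c 0x1 -m 512" EAL parameters and "Reactor started on core 0"
  # lines in the log); -t 1 is the per-fuzzer run time and -Z 16 the fuzzer index.
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz \
      -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' \
      -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z 16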
00:07:58.510 This may also happen if the target rejected all inputs we tried so far 00:07:58.510 [2024-07-24 22:46:56.523232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18157383378766920699 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.510 [2024-07-24 22:46:56.523261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.510 [2024-07-24 22:46:56.523314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.510 [2024-07-24 22:46:56.523337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.510 NEW_FUNC[1/701]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:58.510 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:58.510 #5 NEW cov: 12080 ft: 12039 corp: 2/55b lim: 105 exec/s: 0 rss: 71Mb L: 54/54 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:58.510 [2024-07-24 22:46:56.673638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:721420288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.510 [2024-07-24 22:46:56.673692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.510 #12 NEW cov: 12195 ft: 13044 corp: 3/91b lim: 105 exec/s: 0 rss: 71Mb L: 36/54 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:58.769 [2024-07-24 22:46:56.723633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.723659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.723700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.723713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.769 #13 NEW cov: 12201 ft: 13361 corp: 4/145b lim: 105 exec/s: 0 rss: 71Mb L: 54/54 MS: 1 InsertRepeatedBytes- 00:07:58.769 [2024-07-24 22:46:56.763937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18157383378766920699 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.763962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.764011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.764023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.764070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:277055455363072 len:64508 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.764087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.764136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.764149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.769 #14 NEW cov: 12286 ft: 14174 corp: 5/233b lim: 105 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:07:58.769 [2024-07-24 22:46:56.823891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.823917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.823968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.823981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.769 #15 NEW cov: 12286 ft: 14263 corp: 6/287b lim: 105 exec/s: 0 rss: 72Mb L: 54/88 MS: 1 ShuffleBytes- 00:07:58.769 [2024-07-24 22:46:56.874271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8285492995570793467 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.874295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.874345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.874354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.874404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:277055455363072 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.874416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.874464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.874477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.769 #16 NEW cov: 12286 ft: 14356 corp: 7/375b lim: 105 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 ChangeByte- 00:07:58.769 [2024-07-24 22:46:56.924163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:184549120 len:14080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.924187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.769 [2024-07-24 22:46:56.924226] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.769 [2024-07-24 22:46:56.924239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.769 #17 NEW cov: 12286 ft: 14425 corp: 8/429b lim: 105 exec/s: 0 rss: 72Mb L: 54/88 MS: 1 ChangeBinInt- 00:07:58.770 [2024-07-24 22:46:56.974348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.770 [2024-07-24 22:46:56.974373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.770 [2024-07-24 22:46:56.974413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.770 [2024-07-24 22:46:56.974425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.028 #18 NEW cov: 12286 ft: 14461 corp: 9/483b lim: 105 exec/s: 0 rss: 72Mb L: 54/88 MS: 1 ChangeBinInt- 00:07:59.028 [2024-07-24 22:46:57.014608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18157383378766920699 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.028 [2024-07-24 22:46:57.014632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.028 [2024-07-24 22:46:57.014701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.028 [2024-07-24 22:46:57.014713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.029 [2024-07-24 22:46:57.014763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:277055459491840 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.014775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.029 [2024-07-24 22:46:57.014827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.014839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.029 #19 NEW cov: 12286 ft: 14505 corp: 10/571b lim: 105 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 ChangeByte- 00:07:59.029 [2024-07-24 22:46:57.054526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.054550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.029 [2024-07-24 22:46:57.054610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.054623] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.029 #20 NEW cov: 12286 ft: 14556 corp: 11/625b lim: 105 exec/s: 0 rss: 72Mb L: 54/88 MS: 1 ChangeBit- 00:07:59.029 [2024-07-24 22:46:57.104565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:721420288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.104590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.029 #21 NEW cov: 12286 ft: 14599 corp: 12/654b lim: 105 exec/s: 0 rss: 72Mb L: 29/88 MS: 1 EraseBytes- 00:07:59.029 [2024-07-24 22:46:57.154823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18157383378766920699 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.154847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.029 [2024-07-24 22:46:57.154904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.154916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.029 #22 NEW cov: 12286 ft: 14630 corp: 13/709b lim: 105 exec/s: 0 rss: 72Mb L: 55/88 MS: 1 InsertByte- 00:07:59.029 [2024-07-24 22:46:57.194938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.194964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.029 [2024-07-24 22:46:57.195007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.029 [2024-07-24 22:46:57.195019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.029 #23 NEW cov: 12286 ft: 14652 corp: 14/763b lim: 105 exec/s: 0 rss: 72Mb L: 54/88 MS: 1 ChangeBit- 00:07:59.287 [2024-07-24 22:46:57.245091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:184549120 len:14080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.245118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.287 [2024-07-24 22:46:57.245182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.245194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.287 #24 NEW cov: 12286 ft: 14719 corp: 15/817b lim: 105 exec/s: 0 rss: 72Mb L: 54/88 MS: 1 ShuffleBytes- 00:07:59.287 [2024-07-24 22:46:57.295490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18157383378766920699 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.295514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.287 [2024-07-24 22:46:57.295564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.295574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.287 [2024-07-24 22:46:57.295623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:276750512685056 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.295635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.287 [2024-07-24 22:46:57.295683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.295696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.287 #25 NEW cov: 12286 ft: 14729 corp: 16/906b lim: 105 exec/s: 0 rss: 72Mb L: 89/89 MS: 1 InsertByte- 00:07:59.287 [2024-07-24 22:46:57.335344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8285492995570793467 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.335369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.287 [2024-07-24 22:46:57.335412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.335425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.287 #26 NEW cov: 12286 ft: 14750 corp: 17/953b lim: 105 exec/s: 0 rss: 72Mb L: 47/89 MS: 1 EraseBytes- 00:07:59.287 [2024-07-24 22:46:57.385500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.385525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.287 [2024-07-24 22:46:57.385569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.385582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.287 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:59.287 #27 NEW cov: 12309 ft: 14784 corp: 18/997b lim: 105 exec/s: 0 rss: 72Mb L: 44/89 MS: 1 EraseBytes- 00:07:59.287 [2024-07-24 22:46:57.425592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12514849900987264429 len:44462 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.425620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.287 [2024-07-24 22:46:57.425673] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.425687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.287 #29 NEW cov: 12309 ft: 14786 corp: 19/1055b lim: 105 exec/s: 0 rss: 72Mb L: 58/89 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:59.287 [2024-07-24 22:46:57.465725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.465750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.287 [2024-07-24 22:46:57.465796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.287 [2024-07-24 22:46:57.465809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.287 #30 NEW cov: 12309 ft: 14793 corp: 20/1110b lim: 105 exec/s: 0 rss: 72Mb L: 55/89 MS: 1 CrossOver- 00:07:59.546 [2024-07-24 22:46:57.505711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:721420288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.505734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.546 #31 NEW cov: 12309 ft: 14876 corp: 21/1134b lim: 105 exec/s: 31 rss: 72Mb L: 24/89 MS: 1 EraseBytes- 00:07:59.546 [2024-07-24 22:46:57.555950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.555975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.546 [2024-07-24 22:46:57.556019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.556032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.546 #32 NEW cov: 12309 ft: 14886 corp: 22/1189b lim: 105 exec/s: 32 rss: 73Mb L: 55/89 MS: 1 ShuffleBytes- 00:07:59.546 [2024-07-24 22:46:57.605966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1125900628262912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.605989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.546 #33 NEW cov: 12309 ft: 14918 corp: 23/1213b lim: 105 exec/s: 33 rss: 73Mb L: 24/89 MS: 1 ChangeBit- 00:07:59.546 [2024-07-24 22:46:57.656361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:184549120 len:14080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.656385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.546 [2024-07-24 22:46:57.656452] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.656465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.546 [2024-07-24 22:46:57.656516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.656528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.546 #34 NEW cov: 12309 ft: 15198 corp: 24/1285b lim: 105 exec/s: 34 rss: 73Mb L: 72/89 MS: 1 InsertRepeatedBytes- 00:07:59.546 [2024-07-24 22:46:57.706386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:184549120 len:14080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.706410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.546 [2024-07-24 22:46:57.706472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.706485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.546 #35 NEW cov: 12309 ft: 15206 corp: 25/1339b lim: 105 exec/s: 35 rss: 73Mb L: 54/89 MS: 1 ShuffleBytes- 00:07:59.546 [2024-07-24 22:46:57.746378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9621242985945531781 len:34182 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.546 [2024-07-24 22:46:57.746401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.804 #36 NEW cov: 12309 ft: 15265 corp: 26/1380b lim: 105 exec/s: 36 rss: 73Mb L: 41/89 MS: 1 InsertRepeatedBytes- 00:07:59.804 [2024-07-24 22:46:57.786633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.786657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.786705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.786717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.804 #37 NEW cov: 12309 ft: 15314 corp: 27/1434b lim: 105 exec/s: 37 rss: 73Mb L: 54/89 MS: 1 CopyPart- 00:07:59.804 [2024-07-24 22:46:57.826711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.826736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.826776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.826789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.804 #38 NEW cov: 12309 ft: 15334 corp: 28/1488b lim: 105 exec/s: 38 rss: 73Mb L: 54/89 MS: 1 CopyPart- 00:07:59.804 [2024-07-24 22:46:57.877120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18157383378766920699 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.877145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.877195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.877207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.877258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:277055455363072 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.877270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.877319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18158513697490467835 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.877334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.804 #39 NEW cov: 12309 ft: 15349 corp: 29/1587b lim: 105 exec/s: 39 rss: 73Mb L: 99/99 MS: 1 CrossOver- 00:07:59.804 [2024-07-24 22:46:57.917242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18157383378766920699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.917278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.917344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.917355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.917407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:277055459491840 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.917419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.917471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.917485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.804 #40 NEW cov: 12309 ft: 15388 corp: 30/1675b lim: 105 exec/s: 40 rss: 73Mb L: 88/99 MS: 1 ShuffleBytes- 00:07:59.804 
[2024-07-24 22:46:57.967160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:184549120 len:13824 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.967185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:57.967224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:57.967236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.804 #41 NEW cov: 12309 ft: 15464 corp: 31/1729b lim: 105 exec/s: 41 rss: 73Mb L: 54/99 MS: 1 ChangeASCIIInt- 00:07:59.804 [2024-07-24 22:46:58.007260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:58.007285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.804 [2024-07-24 22:46:58.007331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.804 [2024-07-24 22:46:58.007342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.062 #42 NEW cov: 12309 ft: 15522 corp: 32/1783b lim: 105 exec/s: 42 rss: 73Mb L: 54/99 MS: 1 ChangeBit- 00:08:00.062 [2024-07-24 22:46:58.047607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14540374737705237449 len:51658 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.047632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.062 [2024-07-24 22:46:58.047679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4227530752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.047689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.062 [2024-07-24 22:46:58.047741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.047753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.062 [2024-07-24 22:46:58.047802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.047814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.062 #43 NEW cov: 12309 ft: 15562 corp: 33/1880b lim: 105 exec/s: 43 rss: 73Mb L: 97/99 MS: 1 InsertRepeatedBytes- 00:08:00.062 [2024-07-24 22:46:58.087532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.087556] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.062 [2024-07-24 22:46:58.087598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.087612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.062 #44 NEW cov: 12309 ft: 15565 corp: 34/1934b lim: 105 exec/s: 44 rss: 73Mb L: 54/99 MS: 1 ShuffleBytes- 00:08:00.062 [2024-07-24 22:46:58.127491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3098476547184773063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.127517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.062 #49 NEW cov: 12309 ft: 15584 corp: 35/1964b lim: 105 exec/s: 49 rss: 73Mb L: 30/99 MS: 5 ChangeByte-CopyPart-CopyPart-InsertRepeatedBytes-CrossOver- 00:08:00.062 [2024-07-24 22:46:58.167998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8285492995570793467 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.168022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.062 [2024-07-24 22:46:58.168077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.168088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.062 [2024-07-24 22:46:58.168135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:277055455363072 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.168147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.062 [2024-07-24 22:46:58.168196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.168208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.062 #50 NEW cov: 12309 ft: 15591 corp: 36/2052b lim: 105 exec/s: 50 rss: 73Mb L: 88/99 MS: 1 ChangeBinInt- 00:08:00.062 [2024-07-24 22:46:58.207870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:721420288 len:88 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.207894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.062 [2024-07-24 22:46:58.247928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:721420288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.062 [2024-07-24 22:46:58.247955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.320 #52 NEW cov: 12309 ft: 15597 corp: 37/2076b lim: 105 exec/s: 52 rss: 73Mb L: 24/99 MS: 2 
ChangeByte-CopyPart- 00:08:00.320 [2024-07-24 22:46:58.288171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12514849900987264429 len:44462 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.320 [2024-07-24 22:46:58.288195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.320 [2024-07-24 22:46:58.288248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.320 [2024-07-24 22:46:58.288262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.320 #53 NEW cov: 12309 ft: 15598 corp: 38/2134b lim: 105 exec/s: 53 rss: 73Mb L: 58/99 MS: 1 ShuffleBytes- 00:08:00.320 [2024-07-24 22:46:58.338326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.320 [2024-07-24 22:46:58.338350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.320 [2024-07-24 22:46:58.338388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.320 [2024-07-24 22:46:58.338401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.320 #54 NEW cov: 12309 ft: 15600 corp: 39/2178b lim: 105 exec/s: 54 rss: 74Mb L: 44/99 MS: 1 ChangeByte- 00:08:00.320 [2024-07-24 22:46:58.388436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.320 [2024-07-24 22:46:58.388459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.320 [2024-07-24 22:46:58.388521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.320 [2024-07-24 22:46:58.388534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.321 #55 NEW cov: 12309 ft: 15612 corp: 40/2233b lim: 105 exec/s: 55 rss: 74Mb L: 55/99 MS: 1 ChangeByte- 00:08:00.321 [2024-07-24 22:46:58.428803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18157383378766920699 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.321 [2024-07-24 22:46:58.428827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.321 [2024-07-24 22:46:58.428876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.321 [2024-07-24 22:46:58.428887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.321 [2024-07-24 22:46:58.428936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:277055459491840 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.321 
[2024-07-24 22:46:58.428949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.321 [2024-07-24 22:46:58.428999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:289360691469747204 len:64508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.321 [2024-07-24 22:46:58.429011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.321 #56 NEW cov: 12309 ft: 15625 corp: 41/2321b lim: 105 exec/s: 56 rss: 74Mb L: 88/99 MS: 1 ChangeBinInt- 00:08:00.321 [2024-07-24 22:46:58.468682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:184549120 len:13824 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.321 [2024-07-24 22:46:58.468707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.321 [2024-07-24 22:46:58.468748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.321 [2024-07-24 22:46:58.468760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.321 [2024-07-24 22:46:58.518844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:184549120 len:13824 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.321 [2024-07-24 22:46:58.518868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.321 [2024-07-24 22:46:58.518931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.321 [2024-07-24 22:46:58.518943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.579 #58 NEW cov: 12309 ft: 15646 corp: 42/2375b lim: 105 exec/s: 29 rss: 74Mb L: 54/99 MS: 2 ShuffleBytes-ChangeBit- 00:08:00.579 #58 DONE cov: 12309 ft: 15646 corp: 42/2375b lim: 105 exec/s: 29 rss: 74Mb 00:08:00.579 Done 58 runs in 2 second(s) 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417
00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:00.579 22:46:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
00:08:00.838 [2024-07-24 22:46:58.696412] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization...
00:08:00.838 [2024-07-24 22:46:58.696499] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485664 ]
00:08:00.838 EAL: No free 2048 kB hugepages reported on node 1
00:08:00.838 [2024-07-24 22:46:58.881402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:00.838 [2024-07-24 22:46:58.948274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.838 [2024-07-24 22:46:59.006893] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:00.838 [2024-07-24 22:46:59.023102] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 ***
00:08:00.838 INFO: Running with entropic power schedule (0xFF, 100).
00:08:00.838 INFO: Seed: 2278735576
00:08:01.096 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55),
00:08:01.096 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8),
00:08:01.096 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:08:01.096 INFO: A corpus is not provided, starting from an empty corpus
00:08:01.096 #2 INITED exec/s: 0 rss: 63Mb
00:08:01.096 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:01.096 This may also happen if the target rejected all inputs we tried so far 00:08:01.096 [2024-07-24 22:46:59.078488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.096 [2024-07-24 22:46:59.078517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.096 [2024-07-24 22:46:59.078580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.096 [2024-07-24 22:46:59.078593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.096 NEW_FUNC[1/702]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:01.096 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:01.096 #32 NEW cov: 12102 ft: 12074 corp: 2/49b lim: 120 exec/s: 0 rss: 70Mb L: 48/48 MS: 5 CrossOver-ChangeBit-InsertRepeatedBytes-InsertByte-InsertRepeatedBytes- 00:08:01.096 [2024-07-24 22:46:59.228958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.096 [2024-07-24 22:46:59.228996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.096 [2024-07-24 22:46:59.229056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.096 [2024-07-24 22:46:59.229070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.096 #33 NEW cov: 12216 ft: 12552 corp: 3/97b lim: 120 exec/s: 0 rss: 70Mb L: 48/48 MS: 1 ShuffleBytes- 00:08:01.096 [2024-07-24 22:46:59.289027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.096 [2024-07-24 22:46:59.289055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.096 [2024-07-24 22:46:59.289096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.096 [2024-07-24 22:46:59.289126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.354 #34 NEW cov: 12222 ft: 12879 corp: 4/158b lim: 120 exec/s: 0 rss: 70Mb L: 61/61 MS: 1 InsertRepeatedBytes- 00:08:01.354 [2024-07-24 22:46:59.328985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443938867272 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.354 [2024-07-24 22:46:59.329011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.354 #36 NEW cov: 12307 ft: 13954 corp: 5/186b lim: 120 exec/s: 0 rss: 71Mb L: 28/61 MS: 2 ChangeByte-CrossOver- 
00:08:01.354 [2024-07-24 22:46:59.369064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443938867272 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.354 [2024-07-24 22:46:59.369095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.354 #37 NEW cov: 12307 ft: 14209 corp: 6/214b lim: 120 exec/s: 0 rss: 71Mb L: 28/61 MS: 1 ChangeBinInt- 00:08:01.354 [2024-07-24 22:46:59.419188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443938867272 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.354 [2024-07-24 22:46:59.419214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.355 #38 NEW cov: 12307 ft: 14252 corp: 7/242b lim: 120 exec/s: 0 rss: 71Mb L: 28/61 MS: 1 ChangeByte- 00:08:01.355 [2024-07-24 22:46:59.459292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443934148680 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.355 [2024-07-24 22:46:59.459318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.355 #39 NEW cov: 12307 ft: 14395 corp: 8/271b lim: 120 exec/s: 0 rss: 71Mb L: 29/61 MS: 1 InsertByte- 00:08:01.355 [2024-07-24 22:46:59.509683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16896 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.355 [2024-07-24 22:46:59.509709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.355 [2024-07-24 22:46:59.509750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.355 [2024-07-24 22:46:59.509763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.355 #40 NEW cov: 12307 ft: 14421 corp: 9/319b lim: 120 exec/s: 0 rss: 71Mb L: 48/61 MS: 1 CopyPart- 00:08:01.355 [2024-07-24 22:46:59.550098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443938867272 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.355 [2024-07-24 22:46:59.550124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.355 [2024-07-24 22:46:59.550194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.355 [2024-07-24 22:46:59.550208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.355 [2024-07-24 22:46:59.550263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.355 [2024-07-24 22:46:59.550276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.355 [2024-07-24 22:46:59.550331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:01.355 [2024-07-24 22:46:59.550354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.613 #41 NEW cov: 12307 ft: 14901 corp: 10/415b lim: 120 exec/s: 0 rss: 71Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:08:01.613 [2024-07-24 22:46:59.609947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:38466 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.609972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.613 [2024-07-24 22:46:59.610029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.610043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.613 #42 NEW cov: 12307 ft: 15013 corp: 11/463b lim: 120 exec/s: 0 rss: 71Mb L: 48/96 MS: 1 ChangeByte- 00:08:01.613 [2024-07-24 22:46:59.650008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.650032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.613 [2024-07-24 22:46:59.650093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.650108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.613 #43 NEW cov: 12307 ft: 15063 corp: 12/511b lim: 120 exec/s: 0 rss: 71Mb L: 48/96 MS: 1 ChangeBit- 00:08:01.613 [2024-07-24 22:46:59.690152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.690176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.613 [2024-07-24 22:46:59.690239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.690253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.613 #44 NEW cov: 12307 ft: 15151 corp: 13/560b lim: 120 exec/s: 0 rss: 71Mb L: 49/96 MS: 1 InsertByte- 00:08:01.613 [2024-07-24 22:46:59.740235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492444341547848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.740262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.613 [2024-07-24 22:46:59.740303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.740316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.613 #45 NEW cov: 12307 ft: 15169 corp: 14/621b lim: 120 exec/s: 0 rss: 71Mb L: 61/96 MS: 1 ChangeBinInt- 00:08:01.613 [2024-07-24 22:46:59.790640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492444341547848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.790666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.613 [2024-07-24 22:46:59.790721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.790734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.613 [2024-07-24 22:46:59.790790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.613 [2024-07-24 22:46:59.790803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.872 #46 NEW cov: 12307 ft: 15475 corp: 15/703b lim: 120 exec/s: 0 rss: 72Mb L: 82/96 MS: 1 InsertRepeatedBytes- 00:08:01.872 [2024-07-24 22:46:59.840413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16896 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.872 [2024-07-24 22:46:59.840438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.872 #47 NEW cov: 12307 ft: 15489 corp: 16/740b lim: 120 exec/s: 0 rss: 72Mb L: 37/96 MS: 1 EraseBytes- 00:08:01.872 [2024-07-24 22:46:59.890707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.872 [2024-07-24 22:46:59.890733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.872 [2024-07-24 22:46:59.890773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.872 [2024-07-24 22:46:59.890785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.872 #48 NEW cov: 12307 ft: 15511 corp: 17/789b lim: 120 exec/s: 0 rss: 72Mb L: 49/96 MS: 1 CrossOver- 00:08:01.872 [2024-07-24 22:46:59.930656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443934148680 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.872 [2024-07-24 22:46:59.930681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.872 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:01.872 #54 NEW cov: 12330 ft: 15603 corp: 18/818b lim: 120 exec/s: 0 rss: 72Mb L: 29/96 MS: 1 ShuffleBytes- 00:08:01.872 [2024-07-24 22:46:59.990942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709486128 len:38466 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:08:01.872 [2024-07-24 22:46:59.990966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.872 [2024-07-24 22:46:59.991024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.872 [2024-07-24 22:46:59.991037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.872 #55 NEW cov: 12330 ft: 15650 corp: 19/866b lim: 120 exec/s: 0 rss: 72Mb L: 48/96 MS: 1 ChangeBinInt- 00:08:01.872 [2024-07-24 22:47:00.040998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9206008623053406464 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.872 [2024-07-24 22:47:00.041029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.872 #56 NEW cov: 12330 ft: 15669 corp: 20/902b lim: 120 exec/s: 56 rss: 72Mb L: 36/96 MS: 1 CMP- DE: "\001\000\177\302P\021\200\270"- 00:08:02.131 [2024-07-24 22:47:00.081405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16896 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.131 [2024-07-24 22:47:00.081440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-07-24 22:47:00.081478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:2809 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.131 [2024-07-24 22:47:00.081492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 [2024-07-24 22:47:00.081548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.131 [2024-07-24 22:47:00.081565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.131 #57 NEW cov: 12330 ft: 15742 corp: 21/987b lim: 120 exec/s: 57 rss: 72Mb L: 85/96 MS: 1 CrossOver- 00:08:02.131 [2024-07-24 22:47:00.131375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492444341547848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.131 [2024-07-24 22:47:00.131400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-07-24 22:47:00.131457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.131 [2024-07-24 22:47:00.131471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 #58 NEW cov: 12330 ft: 15754 corp: 22/1048b lim: 120 exec/s: 58 rss: 72Mb L: 61/96 MS: 1 ChangeBit- 00:08:02.131 [2024-07-24 22:47:00.171646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443150338227 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.131 [2024-07-24 22:47:00.171672] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-07-24 22:47:00.171715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.131 [2024-07-24 22:47:00.171728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 [2024-07-24 22:47:00.171783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.131 [2024-07-24 22:47:00.171796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.131 #60 NEW cov: 12330 ft: 15814 corp: 23/1131b lim: 120 exec/s: 60 rss: 72Mb L: 83/96 MS: 2 ChangeBinInt-CrossOver- 00:08:02.131 [2024-07-24 22:47:00.211459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16896 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.132 [2024-07-24 22:47:00.211484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.132 #61 NEW cov: 12330 ft: 15823 corp: 24/1168b lim: 120 exec/s: 61 rss: 72Mb L: 37/96 MS: 1 ShuffleBytes- 00:08:02.132 [2024-07-24 22:47:00.251538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9206008623053406464 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.132 [2024-07-24 22:47:00.251563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.132 #62 NEW cov: 12330 ft: 15885 corp: 25/1204b lim: 120 exec/s: 62 rss: 72Mb L: 36/96 MS: 1 CopyPart- 00:08:02.132 [2024-07-24 22:47:00.302035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16896 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.132 [2024-07-24 22:47:00.302060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.132 [2024-07-24 22:47:00.302121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:2809 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.132 [2024-07-24 22:47:00.302134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.132 [2024-07-24 22:47:00.302191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.132 [2024-07-24 22:47:00.302204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.391 #63 NEW cov: 12330 ft: 15903 corp: 26/1289b lim: 120 exec/s: 63 rss: 72Mb L: 85/96 MS: 1 ShuffleBytes- 00:08:02.391 [2024-07-24 22:47:00.362225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.362250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:02.391 [2024-07-24 22:47:00.362305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744070509380095 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.362318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 [2024-07-24 22:47:00.362373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.362386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.391 #64 NEW cov: 12330 ft: 15913 corp: 27/1377b lim: 120 exec/s: 64 rss: 72Mb L: 88/96 MS: 1 InsertRepeatedBytes- 00:08:02.391 [2024-07-24 22:47:00.401998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443938867272 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.402023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 #65 NEW cov: 12330 ft: 15925 corp: 28/1405b lim: 120 exec/s: 65 rss: 72Mb L: 28/96 MS: 1 ChangeBit- 00:08:02.391 [2024-07-24 22:47:00.442089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9206008623053406464 len:51273 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.442115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 #66 NEW cov: 12330 ft: 15962 corp: 29/1441b lim: 120 exec/s: 66 rss: 72Mb L: 36/96 MS: 1 ChangeBit- 00:08:02.391 [2024-07-24 22:47:00.482701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443934148680 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.482727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 [2024-07-24 22:47:00.482785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.482796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 [2024-07-24 22:47:00.482851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.482864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.391 [2024-07-24 22:47:00.482920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.482933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.391 #67 NEW cov: 12330 ft: 16016 corp: 30/1549b lim: 120 exec/s: 67 rss: 72Mb L: 108/108 MS: 1 InsertRepeatedBytes- 00:08:02.391 [2024-07-24 22:47:00.522485] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.522511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 [2024-07-24 22:47:00.522555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16834 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.522568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 #68 NEW cov: 12330 ft: 16060 corp: 31/1598b lim: 120 exec/s: 68 rss: 72Mb L: 49/108 MS: 1 ChangeBit- 00:08:02.391 [2024-07-24 22:47:00.572839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492444336867507 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.572865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 [2024-07-24 22:47:00.572905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.572917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 [2024-07-24 22:47:00.572971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.391 [2024-07-24 22:47:00.573001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.651 #69 NEW cov: 12330 ft: 16068 corp: 32/1681b lim: 120 exec/s: 69 rss: 72Mb L: 83/108 MS: 1 ShuffleBytes- 00:08:02.651 [2024-07-24 22:47:00.622621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.622648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.651 #70 NEW cov: 12330 ft: 16083 corp: 33/1721b lim: 120 exec/s: 70 rss: 72Mb L: 40/108 MS: 1 EraseBytes- 00:08:02.651 [2024-07-24 22:47:00.682966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709486128 len:38466 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.682992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.651 [2024-07-24 22:47:00.683031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.683045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.651 #71 NEW cov: 12330 ft: 16098 corp: 34/1769b lim: 120 exec/s: 71 rss: 72Mb L: 48/108 MS: 1 ChangeBit- 00:08:02.651 [2024-07-24 22:47:00.732963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208465746417436744 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 
[2024-07-24 22:47:00.732989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.651 #72 NEW cov: 12330 ft: 16106 corp: 35/1798b lim: 120 exec/s: 72 rss: 72Mb L: 29/108 MS: 1 CrossOver- 00:08:02.651 [2024-07-24 22:47:00.783226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.783251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.651 [2024-07-24 22:47:00.783291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.783303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.651 #73 NEW cov: 12330 ft: 16112 corp: 36/1847b lim: 120 exec/s: 73 rss: 73Mb L: 49/108 MS: 1 InsertByte- 00:08:02.651 [2024-07-24 22:47:00.823697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492444341547848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.823724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.651 [2024-07-24 22:47:00.823794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.823807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.651 [2024-07-24 22:47:00.823862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5208492446220568648 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.823876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.651 [2024-07-24 22:47:00.823931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.651 [2024-07-24 22:47:00.823943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.910 #74 NEW cov: 12330 ft: 16138 corp: 37/1944b lim: 120 exec/s: 74 rss: 73Mb L: 97/108 MS: 1 CrossOver- 00:08:02.910 [2024-07-24 22:47:00.873665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492444336867507 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.910 [2024-07-24 22:47:00.873690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.910 [2024-07-24 22:47:00.873749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.910 [2024-07-24 22:47:00.873762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.911 [2024-07-24 22:47:00.873818] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.911 [2024-07-24 22:47:00.873832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.911 #75 NEW cov: 12330 ft: 16173 corp: 38/2027b lim: 120 exec/s: 75 rss: 73Mb L: 83/108 MS: 1 CrossOver- 00:08:02.911 [2024-07-24 22:47:00.923636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13258597302978740223 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.911 [2024-07-24 22:47:00.923661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.911 [2024-07-24 22:47:00.923699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.911 [2024-07-24 22:47:00.923712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.911 #76 NEW cov: 12330 ft: 16181 corp: 39/2077b lim: 120 exec/s: 76 rss: 73Mb L: 50/108 MS: 1 InsertByte- 00:08:02.911 [2024-07-24 22:47:00.973622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.911 [2024-07-24 22:47:00.973646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.911 #77 NEW cov: 12330 ft: 16194 corp: 40/2115b lim: 120 exec/s: 77 rss: 73Mb L: 38/108 MS: 1 InsertByte- 00:08:02.911 [2024-07-24 22:47:01.013900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743257665765375 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.911 [2024-07-24 22:47:01.013928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.911 [2024-07-24 22:47:01.013984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702204696163516415 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.911 [2024-07-24 22:47:01.013998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.911 #78 NEW cov: 12330 ft: 16205 corp: 41/2184b lim: 120 exec/s: 78 rss: 73Mb L: 69/108 MS: 1 EraseBytes- 00:08:02.911 [2024-07-24 22:47:01.063894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5208492443934148680 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.911 [2024-07-24 22:47:01.063919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.911 #79 NEW cov: 12330 ft: 16292 corp: 42/2213b lim: 120 exec/s: 39 rss: 73Mb L: 29/108 MS: 1 ChangeByte- 00:08:02.911 #79 DONE cov: 12330 ft: 16292 corp: 42/2213b lim: 120 exec/s: 39 rss: 73Mb 00:08:02.911 ###### Recommended dictionary. ###### 00:08:02.911 "\001\000\177\302P\021\200\270" # Uses: 0 00:08:02.911 ###### End of recommended dictionary. 
###### 00:08:02.911 Done 79 runs in 2 second(s) 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:03.170 22:47:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:03.170 [2024-07-24 22:47:01.242370] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:08:03.170 [2024-07-24 22:47:01.242454] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486099 ] 00:08:03.170 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.430 [2024-07-24 22:47:01.418027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.430 [2024-07-24 22:47:01.482330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.430 [2024-07-24 22:47:01.540815] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.430 [2024-07-24 22:47:01.557057] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:03.430 INFO: Running with entropic power schedule (0xFF, 100). 00:08:03.430 INFO: Seed: 518759239 00:08:03.430 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:08:03.430 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:08:03.430 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:03.430 INFO: A corpus is not provided, starting from an empty corpus 00:08:03.430 #2 INITED exec/s: 0 rss: 64Mb 00:08:03.430 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:03.430 This may also happen if the target rejected all inputs we tried so far 00:08:03.430 [2024-07-24 22:47:01.612462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.430 [2024-07-24 22:47:01.612490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.430 [2024-07-24 22:47:01.612544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.430 [2024-07-24 22:47:01.612557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.430 [2024-07-24 22:47:01.612608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.430 [2024-07-24 22:47:01.612620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.689 NEW_FUNC[1/700]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:03.689 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:03.689 #6 NEW cov: 12046 ft: 12045 corp: 2/78b lim: 100 exec/s: 0 rss: 71Mb L: 77/77 MS: 4 CopyPart-InsertByte-CrossOver-InsertRepeatedBytes- 00:08:03.689 [2024-07-24 22:47:01.763110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.689 [2024-07-24 22:47:01.763162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.689 [2024-07-24 22:47:01.763234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.689 [2024-07-24 22:47:01.763255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:03.689 [2024-07-24 22:47:01.763322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.689 [2024-07-24 22:47:01.763342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.689 #7 NEW cov: 12159 ft: 12646 corp: 3/155b lim: 100 exec/s: 0 rss: 71Mb L: 77/77 MS: 1 ChangeByte- 00:08:03.689 [2024-07-24 22:47:01.822814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.689 [2024-07-24 22:47:01.822838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.689 [2024-07-24 22:47:01.822878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.689 [2024-07-24 22:47:01.822889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.689 #8 NEW cov: 12165 ft: 13100 corp: 4/212b lim: 100 exec/s: 0 rss: 72Mb L: 57/77 MS: 1 CrossOver- 00:08:03.689 [2024-07-24 22:47:01.873070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.689 [2024-07-24 22:47:01.873097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.689 [2024-07-24 22:47:01.873167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.689 [2024-07-24 22:47:01.873180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.689 [2024-07-24 22:47:01.873229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.689 [2024-07-24 22:47:01.873242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.949 #9 NEW cov: 12250 ft: 13442 corp: 5/289b lim: 100 exec/s: 0 rss: 72Mb L: 77/77 MS: 1 ChangeBinInt- 00:08:03.949 [2024-07-24 22:47:01.913036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.949 [2024-07-24 22:47:01.913062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:01.913105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.949 [2024-07-24 22:47:01.913118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.949 #10 NEW cov: 12250 ft: 13577 corp: 6/346b lim: 100 exec/s: 0 rss: 72Mb L: 57/77 MS: 1 CrossOver- 00:08:03.949 [2024-07-24 22:47:01.963211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.949 [2024-07-24 22:47:01.963236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:01.963287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.949 [2024-07-24 22:47:01.963299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.949 #11 NEW cov: 12250 ft: 13620 corp: 7/396b lim: 100 exec/s: 0 rss: 72Mb L: 50/77 MS: 1 EraseBytes- 00:08:03.949 [2024-07-24 22:47:02.013342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.949 [2024-07-24 22:47:02.013366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:02.013402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.949 [2024-07-24 22:47:02.013414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.949 #12 NEW cov: 12250 ft: 13728 corp: 8/453b lim: 100 exec/s: 0 rss: 72Mb L: 57/77 MS: 1 CrossOver- 00:08:03.949 [2024-07-24 22:47:02.053453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.949 [2024-07-24 22:47:02.053478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:02.053533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.949 [2024-07-24 22:47:02.053545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.949 #13 NEW cov: 12250 ft: 13772 corp: 9/503b lim: 100 exec/s: 0 rss: 72Mb L: 50/77 MS: 1 ChangeByte- 00:08:03.949 [2024-07-24 22:47:02.103837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.949 [2024-07-24 22:47:02.103860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:02.103923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.949 [2024-07-24 22:47:02.103933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:02.103981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.949 [2024-07-24 22:47:02.103997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:02.104047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.949 [2024-07-24 22:47:02.104059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.949 #14 NEW cov: 12250 ft: 14056 corp: 10/601b lim: 100 exec/s: 0 rss: 72Mb L: 98/98 MS: 1 CopyPart- 00:08:03.949 [2024-07-24 22:47:02.143930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.949 [2024-07-24 22:47:02.143954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:02.144008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.949 [2024-07-24 22:47:02.144020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:02.144067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.949 [2024-07-24 22:47:02.144083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.949 [2024-07-24 22:47:02.144132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.949 [2024-07-24 22:47:02.144145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.209 #15 NEW cov: 12250 ft: 14112 corp: 11/697b lim: 100 exec/s: 0 rss: 72Mb L: 96/98 MS: 1 InsertRepeatedBytes- 00:08:04.209 [2024-07-24 22:47:02.183944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.209 [2024-07-24 22:47:02.183969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.184020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.209 [2024-07-24 22:47:02.184032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.184082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.209 [2024-07-24 22:47:02.184094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.209 #17 NEW cov: 12250 ft: 14122 corp: 12/776b lim: 100 exec/s: 0 rss: 72Mb L: 79/98 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:08:04.209 [2024-07-24 22:47:02.224185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.209 [2024-07-24 22:47:02.224209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.224274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.209 [2024-07-24 22:47:02.224284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.224342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.209 [2024-07-24 22:47:02.224354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.224401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.209 [2024-07-24 22:47:02.224413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.209 #18 NEW cov: 12250 ft: 14149 corp: 13/872b lim: 100 exec/s: 0 rss: 72Mb L: 96/98 MS: 1 ChangeBit- 00:08:04.209 [2024-07-24 22:47:02.274224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.209 [2024-07-24 22:47:02.274248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.274326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.209 [2024-07-24 22:47:02.274337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.274388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.209 [2024-07-24 22:47:02.274400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.209 #19 NEW cov: 12250 ft: 14167 corp: 14/951b lim: 100 exec/s: 0 rss: 72Mb L: 79/98 MS: 1 ShuffleBytes- 00:08:04.209 [2024-07-24 22:47:02.324372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.209 [2024-07-24 22:47:02.324397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.324444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.209 [2024-07-24 22:47:02.324455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.324503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.209 [2024-07-24 22:47:02.324515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.209 #20 NEW cov: 12250 ft: 14213 corp: 15/1030b lim: 100 exec/s: 0 rss: 72Mb L: 79/98 MS: 1 CopyPart- 00:08:04.209 [2024-07-24 22:47:02.364548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.209 [2024-07-24 22:47:02.364571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.364636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.209 [2024-07-24 22:47:02.364645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.364694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.209 [2024-07-24 22:47:02.364707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.364756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.209 [2024-07-24 22:47:02.364768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.209 #21 NEW cov: 12250 ft: 14224 corp: 16/1126b lim: 100 exec/s: 0 rss: 72Mb L: 96/98 MS: 1 CopyPart- 00:08:04.209 [2024-07-24 22:47:02.404427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.209 [2024-07-24 22:47:02.404451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.209 [2024-07-24 22:47:02.404496] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.209 [2024-07-24 22:47:02.404507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.468 #22 NEW cov: 12250 ft: 14276 corp: 17/1183b lim: 100 exec/s: 0 rss: 72Mb L: 57/98 MS: 1 CopyPart- 00:08:04.468 [2024-07-24 22:47:02.454622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.468 [2024-07-24 22:47:02.454645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.468 [2024-07-24 22:47:02.454699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.468 [2024-07-24 22:47:02.454712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.468 #23 NEW cov: 12250 ft: 14293 corp: 18/1240b lim: 100 exec/s: 0 rss: 72Mb L: 57/98 MS: 1 ShuffleBytes- 00:08:04.468 [2024-07-24 22:47:02.494749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.468 [2024-07-24 22:47:02.494773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.468 [2024-07-24 22:47:02.494809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.468 [2024-07-24 22:47:02.494821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.468 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:04.468 #24 NEW cov: 12273 ft: 14367 corp: 19/1280b lim: 100 exec/s: 0 rss: 72Mb L: 40/98 MS: 1 EraseBytes- 00:08:04.468 [2024-07-24 22:47:02.534937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.468 [2024-07-24 22:47:02.534960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.468 [2024-07-24 22:47:02.535009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.468 [2024-07-24 22:47:02.535019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.468 [2024-07-24 22:47:02.535069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.468 [2024-07-24 22:47:02.535085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.468 #25 NEW cov: 12273 ft: 14409 corp: 20/1359b lim: 100 exec/s: 0 rss: 73Mb L: 79/98 MS: 1 ChangeBinInt- 00:08:04.468 [2024-07-24 22:47:02.584966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.468 [2024-07-24 22:47:02.584989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.468 [2024-07-24 22:47:02.585025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.468 [2024-07-24 22:47:02.585037] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.468 #26 NEW cov: 12273 ft: 14432 corp: 21/1416b lim: 100 exec/s: 26 rss: 73Mb L: 57/98 MS: 1 CopyPart- 00:08:04.468 [2024-07-24 22:47:02.635216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.468 [2024-07-24 22:47:02.635239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.468 [2024-07-24 22:47:02.635303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.468 [2024-07-24 22:47:02.635323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.468 [2024-07-24 22:47:02.635386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.468 [2024-07-24 22:47:02.635398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.468 #27 NEW cov: 12273 ft: 14454 corp: 22/1493b lim: 100 exec/s: 27 rss: 73Mb L: 77/98 MS: 1 CopyPart- 00:08:04.728 [2024-07-24 22:47:02.685336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.728 [2024-07-24 22:47:02.685359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.685427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.728 [2024-07-24 22:47:02.685439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.685487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.728 [2024-07-24 22:47:02.685499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.728 #28 NEW cov: 12273 ft: 14473 corp: 23/1572b lim: 100 exec/s: 28 rss: 73Mb L: 79/98 MS: 1 ChangeBinInt- 00:08:04.728 [2024-07-24 22:47:02.735459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.728 [2024-07-24 22:47:02.735482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.735546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.728 [2024-07-24 22:47:02.735558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.735607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.728 [2024-07-24 22:47:02.735619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.728 #29 NEW cov: 12273 ft: 14484 corp: 24/1633b lim: 100 exec/s: 29 rss: 73Mb L: 61/98 MS: 1 InsertRepeatedBytes- 00:08:04.728 [2024-07-24 22:47:02.775733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 
nsid:0 00:08:04.728 [2024-07-24 22:47:02.775757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.775804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.728 [2024-07-24 22:47:02.775813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.775863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.728 [2024-07-24 22:47:02.775875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.775926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.728 [2024-07-24 22:47:02.775938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.728 #30 NEW cov: 12273 ft: 14488 corp: 25/1730b lim: 100 exec/s: 30 rss: 73Mb L: 97/98 MS: 1 CopyPart- 00:08:04.728 [2024-07-24 22:47:02.815733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.728 [2024-07-24 22:47:02.815756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.815823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.728 [2024-07-24 22:47:02.815834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.815885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.728 [2024-07-24 22:47:02.815898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.728 #31 NEW cov: 12273 ft: 14499 corp: 26/1807b lim: 100 exec/s: 31 rss: 73Mb L: 77/98 MS: 1 ChangeBinInt- 00:08:04.728 [2024-07-24 22:47:02.855858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.728 [2024-07-24 22:47:02.855884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.855938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.728 [2024-07-24 22:47:02.855950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.856000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.728 [2024-07-24 22:47:02.856013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.728 #32 NEW cov: 12273 ft: 14560 corp: 27/1886b lim: 100 exec/s: 32 rss: 73Mb L: 79/98 MS: 1 ChangeByte- 00:08:04.728 [2024-07-24 22:47:02.895821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.728 [2024-07-24 22:47:02.895844] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.728 [2024-07-24 22:47:02.895882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.728 [2024-07-24 22:47:02.895893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.728 #33 NEW cov: 12273 ft: 14572 corp: 28/1935b lim: 100 exec/s: 33 rss: 73Mb L: 49/98 MS: 1 EraseBytes- 00:08:04.988 [2024-07-24 22:47:02.946107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.988 [2024-07-24 22:47:02.946131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:02.946179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.988 [2024-07-24 22:47:02.946188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:02.946236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.988 [2024-07-24 22:47:02.946248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.988 #34 NEW cov: 12273 ft: 14600 corp: 29/2014b lim: 100 exec/s: 34 rss: 74Mb L: 79/98 MS: 1 ChangeBit- 00:08:04.988 [2024-07-24 22:47:02.996384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.988 [2024-07-24 22:47:02.996408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:02.996474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.988 [2024-07-24 22:47:02.996484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:02.996532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.988 [2024-07-24 22:47:02.996544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:02.996594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.988 [2024-07-24 22:47:02.996607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.988 #35 NEW cov: 12273 ft: 14643 corp: 30/2111b lim: 100 exec/s: 35 rss: 74Mb L: 97/98 MS: 1 CopyPart- 00:08:04.988 [2024-07-24 22:47:03.046411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.988 [2024-07-24 22:47:03.046435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.046484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.988 [2024-07-24 22:47:03.046499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.046548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.988 [2024-07-24 22:47:03.046560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.988 #36 NEW cov: 12273 ft: 14648 corp: 31/2188b lim: 100 exec/s: 36 rss: 74Mb L: 77/98 MS: 1 ChangeBinInt- 00:08:04.988 [2024-07-24 22:47:03.086625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.988 [2024-07-24 22:47:03.086647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.086711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.988 [2024-07-24 22:47:03.086721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.086768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.988 [2024-07-24 22:47:03.086780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.086828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.988 [2024-07-24 22:47:03.086840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.988 #37 NEW cov: 12273 ft: 14653 corp: 32/2287b lim: 100 exec/s: 37 rss: 74Mb L: 99/99 MS: 1 CrossOver- 00:08:04.988 [2024-07-24 22:47:03.126589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.988 [2024-07-24 22:47:03.126612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.126673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.988 [2024-07-24 22:47:03.126685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.126734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.988 [2024-07-24 22:47:03.126746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.988 #38 NEW cov: 12273 ft: 14673 corp: 33/2366b lim: 100 exec/s: 38 rss: 74Mb L: 79/99 MS: 1 CopyPart- 00:08:04.988 [2024-07-24 22:47:03.176865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:04.988 [2024-07-24 22:47:03.176888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.176935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:04.988 [2024-07-24 22:47:03.176945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.988 
[2024-07-24 22:47:03.176994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:04.988 [2024-07-24 22:47:03.177022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.988 [2024-07-24 22:47:03.177076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:04.989 [2024-07-24 22:47:03.177089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.216935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.248 [2024-07-24 22:47:03.216960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.217024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.248 [2024-07-24 22:47:03.217033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.217086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.248 [2024-07-24 22:47:03.217098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.217150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.248 [2024-07-24 22:47:03.217162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.248 #40 NEW cov: 12273 ft: 14736 corp: 34/2464b lim: 100 exec/s: 40 rss: 74Mb L: 98/99 MS: 2 ChangeByte-InsertByte- 00:08:05.248 [2024-07-24 22:47:03.256823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.248 [2024-07-24 22:47:03.256864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.256899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.248 [2024-07-24 22:47:03.256911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.248 #42 NEW cov: 12273 ft: 14811 corp: 35/2515b lim: 100 exec/s: 42 rss: 74Mb L: 51/99 MS: 2 CrossOver-CrossOver- 00:08:05.248 [2024-07-24 22:47:03.297069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.248 [2024-07-24 22:47:03.297097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.297145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.248 [2024-07-24 22:47:03.297154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.297206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.248 [2024-07-24 22:47:03.297218] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.248 #43 NEW cov: 12273 ft: 14830 corp: 36/2594b lim: 100 exec/s: 43 rss: 74Mb L: 79/99 MS: 1 CopyPart- 00:08:05.248 [2024-07-24 22:47:03.337297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.248 [2024-07-24 22:47:03.337320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.337396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.248 [2024-07-24 22:47:03.337405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.337454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.248 [2024-07-24 22:47:03.337465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.337514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.248 [2024-07-24 22:47:03.337526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.248 #44 NEW cov: 12273 ft: 14845 corp: 37/2693b lim: 100 exec/s: 44 rss: 74Mb L: 99/99 MS: 1 CrossOver- 00:08:05.248 [2024-07-24 22:47:03.387325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.248 [2024-07-24 22:47:03.387348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.387400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.248 [2024-07-24 22:47:03.387413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.387462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.248 [2024-07-24 22:47:03.387474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.248 #45 NEW cov: 12273 ft: 14847 corp: 38/2770b lim: 100 exec/s: 45 rss: 74Mb L: 77/99 MS: 1 ShuffleBytes- 00:08:05.248 [2024-07-24 22:47:03.427550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.248 [2024-07-24 22:47:03.427574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.427621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.248 [2024-07-24 22:47:03.427631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.248 [2024-07-24 22:47:03.427678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.249 [2024-07-24 22:47:03.427705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.249 [2024-07-24 22:47:03.427755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.249 [2024-07-24 22:47:03.427767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.538 #46 NEW cov: 12273 ft: 14875 corp: 39/2869b lim: 100 exec/s: 46 rss: 74Mb L: 99/99 MS: 1 CMP- DE: "\377\377\377\000"- 00:08:05.538 [2024-07-24 22:47:03.477675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.538 [2024-07-24 22:47:03.477699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.538 [2024-07-24 22:47:03.477749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.538 [2024-07-24 22:47:03.477760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.538 [2024-07-24 22:47:03.477808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.538 [2024-07-24 22:47:03.477819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.538 [2024-07-24 22:47:03.477869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.538 [2024-07-24 22:47:03.477881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.539 #47 NEW cov: 12273 ft: 14882 corp: 40/2966b lim: 100 exec/s: 47 rss: 74Mb L: 97/99 MS: 1 ChangeBit- 00:08:05.539 [2024-07-24 22:47:03.527609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.539 [2024-07-24 22:47:03.527633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.539 [2024-07-24 22:47:03.527687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.539 [2024-07-24 22:47:03.527700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.539 #48 NEW cov: 12273 ft: 14888 corp: 41/3012b lim: 100 exec/s: 48 rss: 74Mb L: 46/99 MS: 1 EraseBytes- 00:08:05.539 [2024-07-24 22:47:03.567965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.539 [2024-07-24 22:47:03.567990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.539 [2024-07-24 22:47:03.568040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.539 [2024-07-24 22:47:03.568049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.539 [2024-07-24 22:47:03.568107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.539 [2024-07-24 22:47:03.568119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:08:05.539 [2024-07-24 22:47:03.568170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:05.539 [2024-07-24 22:47:03.568182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.539 #49 NEW cov: 12273 ft: 14904 corp: 42/3111b lim: 100 exec/s: 24 rss: 74Mb L: 99/99 MS: 1 ShuffleBytes- 00:08:05.539 #49 DONE cov: 12273 ft: 14904 corp: 42/3111b lim: 100 exec/s: 24 rss: 74Mb 00:08:05.539 ###### Recommended dictionary. ###### 00:08:05.539 "\377\377\377\000" # Uses: 0 00:08:05.539 ###### End of recommended dictionary. ###### 00:08:05.539 Done 49 runs in 2 second(s) 00:08:05.539 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:05.854 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:05.855 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:05.855 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:05.855 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:05.855 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:05.855 22:47:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:05.855 [2024-07-24 22:47:03.762821] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:08:05.855 [2024-07-24 22:47:03.762903] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486528 ] 00:08:05.855 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.855 [2024-07-24 22:47:04.019126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.138 [2024-07-24 22:47:04.103302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.138 [2024-07-24 22:47:04.161728] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.138 [2024-07-24 22:47:04.177958] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:06.138 INFO: Running with entropic power schedule (0xFF, 100). 00:08:06.138 INFO: Seed: 3138752918 00:08:06.138 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:08:06.138 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:08:06.138 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:06.138 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.138 #2 INITED exec/s: 0 rss: 65Mb 00:08:06.138 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:06.138 This may also happen if the target rejected all inputs we tried so far 00:08:06.138 [2024-07-24 22:47:04.233267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:06.138 [2024-07-24 22:47:04.233299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.445 NEW_FUNC[1/700]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:06.445 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:06.445 #19 NEW cov: 12024 ft: 12022 corp: 2/13b lim: 50 exec/s: 0 rss: 71Mb L: 12/12 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:06.445 [2024-07-24 22:47:04.415635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1543 00:08:06.445 [2024-07-24 22:47:04.415684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.415773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:434041037028460038 len:1543 00:08:06.445 [2024-07-24 22:47:04.415792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.415885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028460038 len:1543 00:08:06.445 [2024-07-24 22:47:04.415903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.445 #22 NEW cov: 12137 ft: 12915 corp: 3/45b lim: 50 exec/s: 0 rss: 71Mb L: 32/32 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:08:06.445 [2024-07-24 22:47:04.465763] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1543 00:08:06.445 [2024-07-24 22:47:04.465789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.465866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:06.445 [2024-07-24 22:47:04.465883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.465971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028460038 len:1543 00:08:06.445 [2024-07-24 22:47:04.465985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.445 #23 NEW cov: 12143 ft: 13159 corp: 4/78b lim: 50 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 InsertByte- 00:08:06.445 [2024-07-24 22:47:04.526275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:06.445 [2024-07-24 22:47:04.526300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.526406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:06.445 [2024-07-24 22:47:04.526424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.526517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35210 00:08:06.445 [2024-07-24 22:47:04.526532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.526617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9910603678816504201 len:35210 00:08:06.445 [2024-07-24 22:47:04.526634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.445 #25 NEW cov: 12228 ft: 13650 corp: 5/120b lim: 50 exec/s: 0 rss: 71Mb L: 42/42 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:06.445 [2024-07-24 22:47:04.576385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1543 00:08:06.445 [2024-07-24 22:47:04.576412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.576493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:06.445 [2024-07-24 22:47:04.576509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.445 [2024-07-24 22:47:04.576590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028458502 len:1543 00:08:06.445 [2024-07-24 22:47:04.576605] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.445 #26 NEW cov: 12228 ft: 13832 corp: 6/153b lim: 50 exec/s: 0 rss: 71Mb L: 33/42 MS: 1 CopyPart- 00:08:06.704 [2024-07-24 22:47:04.647047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:06.704 [2024-07-24 22:47:04.647079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.704 [2024-07-24 22:47:04.647165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:06.704 [2024-07-24 22:47:04.647182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.704 [2024-07-24 22:47:04.647263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35210 00:08:06.704 [2024-07-24 22:47:04.647280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.704 [2024-07-24 22:47:04.647374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9910603678816504201 len:35210 00:08:06.704 [2024-07-24 22:47:04.647390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.704 #27 NEW cov: 12228 ft: 13936 corp: 7/196b lim: 50 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 CopyPart- 00:08:06.704 [2024-07-24 22:47:04.717373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:06.704 [2024-07-24 22:47:04.717408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.705 [2024-07-24 22:47:04.717482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:06.705 [2024-07-24 22:47:04.717501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.705 [2024-07-24 22:47:04.717588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35201 00:08:06.705 [2024-07-24 22:47:04.717606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.705 [2024-07-24 22:47:04.717694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9910603678816504201 len:35210 00:08:06.705 [2024-07-24 22:47:04.717712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.705 #28 NEW cov: 12228 ft: 14002 corp: 8/239b lim: 50 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 ChangeBinInt- 00:08:06.705 [2024-07-24 22:47:04.786929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:06.705 [2024-07-24 22:47:04.786958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.705 #29 NEW cov: 12228 ft: 14098 corp: 9/254b lim: 50 exec/s: 0 rss: 72Mb L: 15/43 MS: 1 InsertRepeatedBytes- 00:08:06.705 [2024-07-24 22:47:04.857695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:5639 00:08:06.705 [2024-07-24 22:47:04.857726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.705 [2024-07-24 22:47:04.857792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:06.705 [2024-07-24 22:47:04.857810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.705 [2024-07-24 22:47:04.857871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028460038 len:1543 00:08:06.705 [2024-07-24 22:47:04.857887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.705 #30 NEW cov: 12228 ft: 14151 corp: 10/287b lim: 50 exec/s: 0 rss: 72Mb L: 33/43 MS: 1 ChangeBit- 00:08:06.705 [2024-07-24 22:47:04.908148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:06.705 [2024-07-24 22:47:04.908176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.705 [2024-07-24 22:47:04.908254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603614391994761 len:35210 00:08:06.705 [2024-07-24 22:47:04.908273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.705 [2024-07-24 22:47:04.908354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35210 00:08:06.705 [2024-07-24 22:47:04.908373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.705 [2024-07-24 22:47:04.908457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9910603678816504201 len:35210 00:08:06.705 [2024-07-24 22:47:04.908476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.964 #31 NEW cov: 12228 ft: 14225 corp: 11/331b lim: 50 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 InsertByte- 00:08:06.964 [2024-07-24 22:47:04.958184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:06.964 [2024-07-24 22:47:04.958216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 22:47:04.958276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:06.964 [2024-07-24 22:47:04.958291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 
22:47:04.958379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35210 00:08:06.964 [2024-07-24 22:47:04.958394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.964 #32 NEW cov: 12228 ft: 14238 corp: 12/366b lim: 50 exec/s: 0 rss: 72Mb L: 35/44 MS: 1 EraseBytes- 00:08:06.964 [2024-07-24 22:47:05.008213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603676685797769 len:35210 00:08:06.964 [2024-07-24 22:47:05.008240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 22:47:05.008304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:06.964 [2024-07-24 22:47:05.008321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.964 #36 NEW cov: 12228 ft: 14480 corp: 13/392b lim: 50 exec/s: 0 rss: 72Mb L: 26/44 MS: 4 CopyPart-EraseBytes-ShuffleBytes-CrossOver- 00:08:06.964 [2024-07-24 22:47:05.058984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1538 00:08:06.964 [2024-07-24 22:47:05.059008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 22:47:05.059113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1124826165018624 len:1543 00:08:06.964 [2024-07-24 22:47:05.059131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 22:47:05.059210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028460032 len:1543 00:08:06.964 [2024-07-24 22:47:05.059221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 22:47:05.059304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:434041036927796742 len:1543 00:08:06.964 [2024-07-24 22:47:05.059320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.964 #37 NEW cov: 12228 ft: 14504 corp: 14/433b lim: 50 exec/s: 0 rss: 72Mb L: 41/44 MS: 1 CMP- DE: "\001\000\000\000\000\000\003\377"- 00:08:06.964 [2024-07-24 22:47:05.129222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:06.964 [2024-07-24 22:47:05.129248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 22:47:05.129334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504193 len:35210 00:08:06.964 [2024-07-24 22:47:05.129351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 22:47:05.129443] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35210 00:08:06.964 [2024-07-24 22:47:05.129461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.964 [2024-07-24 22:47:05.129544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9910603678816504201 len:35210 00:08:06.964 [2024-07-24 22:47:05.129561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.964 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:06.964 #38 NEW cov: 12251 ft: 14591 corp: 15/476b lim: 50 exec/s: 0 rss: 72Mb L: 43/44 MS: 1 ChangeBit- 00:08:07.224 [2024-07-24 22:47:05.179166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:5639 00:08:07.224 [2024-07-24 22:47:05.179196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.179278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:07.224 [2024-07-24 22:47:05.179291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.179379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1010501789331883526 len:1543 00:08:07.224 [2024-07-24 22:47:05.179395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.224 #39 NEW cov: 12251 ft: 14612 corp: 16/509b lim: 50 exec/s: 39 rss: 72Mb L: 33/44 MS: 1 ChangeBit- 00:08:07.224 [2024-07-24 22:47:05.239372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1543 00:08:07.224 [2024-07-24 22:47:05.239399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.239474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:07.224 [2024-07-24 22:47:05.239491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.239569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028458502 len:1543 00:08:07.224 [2024-07-24 22:47:05.239586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.224 #40 NEW cov: 12251 ft: 14631 corp: 17/542b lim: 50 exec/s: 40 rss: 72Mb L: 33/44 MS: 1 ShuffleBytes- 00:08:07.224 [2024-07-24 22:47:05.289567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:5639 00:08:07.224 [2024-07-24 22:47:05.289593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:07.224 [2024-07-24 22:47:05.289672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196350 len:1543 00:08:07.224 [2024-07-24 22:47:05.289689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.289776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028460038 len:1543 00:08:07.224 [2024-07-24 22:47:05.289794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.224 #41 NEW cov: 12251 ft: 14653 corp: 18/575b lim: 50 exec/s: 41 rss: 72Mb L: 33/44 MS: 1 ChangeBinInt- 00:08:07.224 [2024-07-24 22:47:05.339711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:5639 00:08:07.224 [2024-07-24 22:47:05.339739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.339801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:07.224 [2024-07-24 22:47:05.339817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.339901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028462086 len:1543 00:08:07.224 [2024-07-24 22:47:05.339916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.224 #42 NEW cov: 12251 ft: 14664 corp: 19/608b lim: 50 exec/s: 42 rss: 72Mb L: 33/44 MS: 1 ChangeBit- 00:08:07.224 [2024-07-24 22:47:05.390190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1538 00:08:07.224 [2024-07-24 22:47:05.390214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.390310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2532201048571904 len:1543 00:08:07.224 [2024-07-24 22:47:05.390326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.390407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028460032 len:1543 00:08:07.224 [2024-07-24 22:47:05.390419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.224 [2024-07-24 22:47:05.390505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:434041036927796742 len:1543 00:08:07.224 [2024-07-24 22:47:05.390521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.224 #43 NEW cov: 12251 ft: 14710 corp: 20/649b lim: 50 exec/s: 43 rss: 72Mb L: 41/44 MS: 1 ChangeByte- 00:08:07.483 [2024-07-24 22:47:05.450561] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1538 00:08:07.483 [2024-07-24 22:47:05.450586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.483 [2024-07-24 22:47:05.450681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1124826165018624 len:1543 00:08:07.483 [2024-07-24 22:47:05.450699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.483 [2024-07-24 22:47:05.450782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028460032 len:1543 00:08:07.483 [2024-07-24 22:47:05.450796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.483 [2024-07-24 22:47:05.450882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:434315884769969670 len:65536 00:08:07.483 [2024-07-24 22:47:05.450896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.483 #44 NEW cov: 12251 ft: 14719 corp: 21/697b lim: 50 exec/s: 44 rss: 72Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:08:07.483 [2024-07-24 22:47:05.500580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:07.483 [2024-07-24 22:47:05.500607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.483 [2024-07-24 22:47:05.500677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:07.483 [2024-07-24 22:47:05.500698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.483 [2024-07-24 22:47:05.500760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35210 00:08:07.483 [2024-07-24 22:47:05.500774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.483 #45 NEW cov: 12251 ft: 14756 corp: 22/734b lim: 50 exec/s: 45 rss: 72Mb L: 37/48 MS: 1 CopyPart- 00:08:07.484 [2024-07-24 22:47:05.560767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1543 00:08:07.484 [2024-07-24 22:47:05.560793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.484 [2024-07-24 22:47:05.560889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:07.484 [2024-07-24 22:47:05.560905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.484 [2024-07-24 22:47:05.560988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10233873826186657798 len:1543 00:08:07.484 [2024-07-24 22:47:05.561004] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.484 #46 NEW cov: 12251 ft: 14782 corp: 23/767b lim: 50 exec/s: 46 rss: 72Mb L: 33/48 MS: 1 ChangeByte- 00:08:07.484 [2024-07-24 22:47:05.620979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1543 00:08:07.484 [2024-07-24 22:47:05.621004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.484 [2024-07-24 22:47:05.621093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:07.484 [2024-07-24 22:47:05.621118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.484 [2024-07-24 22:47:05.621202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10233873817596723206 len:1543 00:08:07.484 [2024-07-24 22:47:05.621219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.484 #52 NEW cov: 12251 ft: 14785 corp: 24/800b lim: 50 exec/s: 52 rss: 72Mb L: 33/48 MS: 1 ChangeBit- 00:08:07.484 [2024-07-24 22:47:05.681500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:07.484 [2024-07-24 22:47:05.681524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.484 [2024-07-24 22:47:05.681622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:07.484 [2024-07-24 22:47:05.681641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.484 [2024-07-24 22:47:05.681721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35201 00:08:07.484 [2024-07-24 22:47:05.681737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.484 [2024-07-24 22:47:05.681821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9910603678816504201 len:35255 00:08:07.484 [2024-07-24 22:47:05.681838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.742 #53 NEW cov: 12251 ft: 14796 corp: 25/846b lim: 50 exec/s: 53 rss: 72Mb L: 46/48 MS: 1 InsertRepeatedBytes- 00:08:07.742 [2024-07-24 22:47:05.741756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:07.742 [2024-07-24 22:47:05.741782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.742 [2024-07-24 22:47:05.741862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504193 len:35210 00:08:07.742 [2024-07-24 22:47:05.741877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.742 [2024-07-24 22:47:05.741959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35210 00:08:07.742 [2024-07-24 22:47:05.741972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.742 [2024-07-24 22:47:05.742058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9910603678816504201 len:35210 00:08:07.742 [2024-07-24 22:47:05.742075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:07.742 #54 NEW cov: 12251 ft: 14821 corp: 26/889b lim: 50 exec/s: 54 rss: 72Mb L: 43/48 MS: 1 ChangeBit- 00:08:07.742 [2024-07-24 22:47:05.801836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:5639 00:08:07.742 [2024-07-24 22:47:05.801862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.742 [2024-07-24 22:47:05.801934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:07.742 [2024-07-24 22:47:05.801951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.742 [2024-07-24 22:47:05.802042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1010501789331883526 len:1543 00:08:07.742 [2024-07-24 22:47:05.802057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.742 #55 NEW cov: 12251 ft: 14826 corp: 27/922b lim: 50 exec/s: 55 rss: 72Mb L: 33/48 MS: 1 ChangeBit- 00:08:07.742 [2024-07-24 22:47:05.861592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:10 00:08:07.743 [2024-07-24 22:47:05.861618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.743 #56 NEW cov: 12251 ft: 14880 corp: 28/934b lim: 50 exec/s: 56 rss: 72Mb L: 12/48 MS: 1 ChangeBinInt- 00:08:07.743 [2024-07-24 22:47:05.912006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603676685783689 len:35210 00:08:07.743 [2024-07-24 22:47:05.912033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.743 [2024-07-24 22:47:05.912125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:07.743 [2024-07-24 22:47:05.912147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.002 #57 NEW cov: 12251 ft: 14897 corp: 29/961b lim: 50 exec/s: 57 rss: 72Mb L: 27/48 MS: 1 InsertByte- 00:08:08.002 [2024-07-24 22:47:05.983033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603678816504201 len:35210 00:08:08.002 [2024-07-24 22:47:05.983065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.002 [2024-07-24 22:47:05.983139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:08.002 [2024-07-24 22:47:05.983155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.002 [2024-07-24 22:47:05.983217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9910603678816504201 len:35210 00:08:08.002 [2024-07-24 22:47:05.983234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.002 [2024-07-24 22:47:05.983330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:9910603678816504201 len:35210 00:08:08.002 [2024-07-24 22:47:05.983347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.002 #58 NEW cov: 12251 ft: 15026 corp: 30/1003b lim: 50 exec/s: 58 rss: 72Mb L: 42/48 MS: 1 CopyPart- 00:08:08.002 [2024-07-24 22:47:06.033356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1543 00:08:08.002 [2024-07-24 22:47:06.033385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.002 [2024-07-24 22:47:06.033472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:432352187168196102 len:1543 00:08:08.002 [2024-07-24 22:47:06.033487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.002 [2024-07-24 22:47:06.033570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10233873826186657798 len:1543 00:08:08.002 [2024-07-24 22:47:06.033584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.002 #59 NEW cov: 12251 ft: 15029 corp: 31/1037b lim: 50 exec/s: 59 rss: 72Mb L: 34/48 MS: 1 InsertByte- 00:08:08.002 [2024-07-24 22:47:06.083716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:434041038169310726 len:1543 00:08:08.002 [2024-07-24 22:47:06.083741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.002 [2024-07-24 22:47:06.083827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:449522160747546118 len:1543 00:08:08.002 [2024-07-24 22:47:06.083844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.002 [2024-07-24 22:47:06.083925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:434041037028460038 len:1543 00:08:08.002 [2024-07-24 22:47:06.083941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.002 #60 NEW cov: 12251 ft: 15039 corp: 32/1070b lim: 50 exec/s: 60 rss: 72Mb L: 33/48 MS: 1 
ChangeByte- 00:08:08.002 [2024-07-24 22:47:06.133928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9910603676685790345 len:35210 00:08:08.002 [2024-07-24 22:47:06.133955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.002 [2024-07-24 22:47:06.134012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9910603678816504201 len:35210 00:08:08.002 [2024-07-24 22:47:06.134028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.002 #61 NEW cov: 12251 ft: 15050 corp: 33/1097b lim: 50 exec/s: 61 rss: 73Mb L: 27/48 MS: 1 ChangeByte- 00:08:08.002 [2024-07-24 22:47:06.204054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:10 00:08:08.002 [2024-07-24 22:47:06.204094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.261 #62 NEW cov: 12251 ft: 15056 corp: 34/1109b lim: 50 exec/s: 31 rss: 73Mb L: 12/48 MS: 1 ShuffleBytes- 00:08:08.261 #62 DONE cov: 12251 ft: 15056 corp: 34/1109b lim: 50 exec/s: 31 rss: 73Mb 00:08:08.261 ###### Recommended dictionary. ###### 00:08:08.261 "\001\000\000\000\000\000\003\377" # Uses: 0 00:08:08.261 ###### End of recommended dictionary. ###### 00:08:08.261 Done 62 runs in 2 second(s) 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:08.261 22:47:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:08.261 [2024-07-24 22:47:06.405234] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:08.261 [2024-07-24 22:47:06.405291] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486974 ] 00:08:08.261 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.520 [2024-07-24 22:47:06.658139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.779 [2024-07-24 22:47:06.741899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.779 [2024-07-24 22:47:06.800411] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.779 [2024-07-24 22:47:06.816660] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:08.779 INFO: Running with entropic power schedule (0xFF, 100). 00:08:08.779 INFO: Seed: 1481800235 00:08:08.779 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:08:08.779 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:08:08.779 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:08.779 INFO: A corpus is not provided, starting from an empty corpus 00:08:08.779 #2 INITED exec/s: 0 rss: 65Mb 00:08:08.779 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:08.779 This may also happen if the target rejected all inputs we tried so far 00:08:08.779 [2024-07-24 22:47:06.865814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.779 [2024-07-24 22:47:06.865848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.779 [2024-07-24 22:47:06.865896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.779 [2024-07-24 22:47:06.865913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.779 [2024-07-24 22:47:06.865973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.779 [2024-07-24 22:47:06.865989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.039 NEW_FUNC[1/701]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:09.039 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:09.039 #11 NEW cov: 12081 ft: 12062 corp: 2/62b lim: 90 exec/s: 0 rss: 71Mb L: 61/61 MS: 4 ChangeBinInt-CopyPart-ChangeByte-InsertRepeatedBytes- 00:08:09.039 [2024-07-24 22:47:07.016418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.039 [2024-07-24 22:47:07.016472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.016552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.039 [2024-07-24 22:47:07.016576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.016651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.039 [2024-07-24 22:47:07.016673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.039 NEW_FUNC[1/1]: 0xf99160 in rte_get_tsc_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:61 00:08:09.039 #12 NEW cov: 12195 ft: 12615 corp: 3/123b lim: 90 exec/s: 0 rss: 71Mb L: 61/61 MS: 1 ChangeBinInt- 00:08:09.039 [2024-07-24 22:47:07.076226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.039 [2024-07-24 22:47:07.076253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.076294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.039 [2024-07-24 22:47:07.076307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.076359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 
00:08:09.039 [2024-07-24 22:47:07.076373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.039 #13 NEW cov: 12201 ft: 12938 corp: 4/184b lim: 90 exec/s: 0 rss: 71Mb L: 61/61 MS: 1 ChangeByte- 00:08:09.039 [2024-07-24 22:47:07.116349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.039 [2024-07-24 22:47:07.116376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.116424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.039 [2024-07-24 22:47:07.116440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.116494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.039 [2024-07-24 22:47:07.116509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.039 #14 NEW cov: 12286 ft: 13234 corp: 5/241b lim: 90 exec/s: 0 rss: 71Mb L: 57/61 MS: 1 CrossOver- 00:08:09.039 [2024-07-24 22:47:07.156601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.039 [2024-07-24 22:47:07.156626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.156684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.039 [2024-07-24 22:47:07.156697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.156750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.039 [2024-07-24 22:47:07.156764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.156816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.039 [2024-07-24 22:47:07.156828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.039 #15 NEW cov: 12286 ft: 13619 corp: 6/326b lim: 90 exec/s: 0 rss: 72Mb L: 85/85 MS: 1 CopyPart- 00:08:09.039 [2024-07-24 22:47:07.206585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.039 [2024-07-24 22:47:07.206611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.206658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.039 [2024-07-24 22:47:07.206673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.039 [2024-07-24 22:47:07.206729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.039 
[2024-07-24 22:47:07.206741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.039 #16 NEW cov: 12286 ft: 13752 corp: 7/383b lim: 90 exec/s: 0 rss: 72Mb L: 57/85 MS: 1 ChangeBit- 00:08:09.297 [2024-07-24 22:47:07.246733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.297 [2024-07-24 22:47:07.246760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.297 [2024-07-24 22:47:07.246813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.297 [2024-07-24 22:47:07.246827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.297 [2024-07-24 22:47:07.246880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.297 [2024-07-24 22:47:07.246894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.297 #17 NEW cov: 12286 ft: 13869 corp: 8/444b lim: 90 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 ChangeBinInt- 00:08:09.297 [2024-07-24 22:47:07.296526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.297 [2024-07-24 22:47:07.296554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.297 #19 NEW cov: 12286 ft: 14748 corp: 9/469b lim: 90 exec/s: 0 rss: 72Mb L: 25/85 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:09.297 [2024-07-24 22:47:07.336937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.297 [2024-07-24 22:47:07.336963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.297 [2024-07-24 22:47:07.337007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.297 [2024-07-24 22:47:07.337020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.297 [2024-07-24 22:47:07.337071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.297 [2024-07-24 22:47:07.337091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.297 #20 NEW cov: 12286 ft: 14810 corp: 10/530b lim: 90 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 ChangeByte- 00:08:09.297 [2024-07-24 22:47:07.387102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.297 [2024-07-24 22:47:07.387128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.298 [2024-07-24 22:47:07.387179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.298 [2024-07-24 22:47:07.387193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.298 [2024-07-24 
22:47:07.387247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.298 [2024-07-24 22:47:07.387260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.298 #21 NEW cov: 12286 ft: 14841 corp: 11/591b lim: 90 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 CopyPart- 00:08:09.298 [2024-07-24 22:47:07.427214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.298 [2024-07-24 22:47:07.427240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.298 [2024-07-24 22:47:07.427289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.298 [2024-07-24 22:47:07.427302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.298 [2024-07-24 22:47:07.427357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.298 [2024-07-24 22:47:07.427370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.298 #22 NEW cov: 12286 ft: 14872 corp: 12/652b lim: 90 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 ChangeBit- 00:08:09.298 [2024-07-24 22:47:07.477176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.298 [2024-07-24 22:47:07.477202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.298 [2024-07-24 22:47:07.477246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.298 [2024-07-24 22:47:07.477260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.298 #23 NEW cov: 12286 ft: 15251 corp: 13/705b lim: 90 exec/s: 0 rss: 72Mb L: 53/85 MS: 1 EraseBytes- 00:08:09.557 [2024-07-24 22:47:07.517611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.557 [2024-07-24 22:47:07.517638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.517685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.557 [2024-07-24 22:47:07.517702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.517754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.557 [2024-07-24 22:47:07.517767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.517821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.557 [2024-07-24 22:47:07.517833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.557 #24 NEW cov: 12286 ft: 
15270 corp: 14/785b lim: 90 exec/s: 0 rss: 72Mb L: 80/85 MS: 1 InsertRepeatedBytes- 00:08:09.557 [2024-07-24 22:47:07.557556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.557 [2024-07-24 22:47:07.557582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.557630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.557 [2024-07-24 22:47:07.557644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.557697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.557 [2024-07-24 22:47:07.557710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.557 #25 NEW cov: 12286 ft: 15291 corp: 15/846b lim: 90 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 ChangeBinInt- 00:08:09.557 [2024-07-24 22:47:07.597521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.557 [2024-07-24 22:47:07.597547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.597587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.557 [2024-07-24 22:47:07.597599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.557 #27 NEW cov: 12286 ft: 15352 corp: 16/883b lim: 90 exec/s: 0 rss: 72Mb L: 37/85 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:09.557 [2024-07-24 22:47:07.637815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.557 [2024-07-24 22:47:07.637841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.637886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.557 [2024-07-24 22:47:07.637898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.637951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.557 [2024-07-24 22:47:07.637965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.557 #28 NEW cov: 12286 ft: 15414 corp: 17/944b lim: 90 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 CopyPart- 00:08:09.557 [2024-07-24 22:47:07.687942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.557 [2024-07-24 22:47:07.687966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.688010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.557 [2024-07-24 22:47:07.688026] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.688081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.557 [2024-07-24 22:47:07.688094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.557 #29 NEW cov: 12286 ft: 15433 corp: 18/1005b lim: 90 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 ChangeByte- 00:08:09.557 [2024-07-24 22:47:07.728085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.557 [2024-07-24 22:47:07.728110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.728163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.557 [2024-07-24 22:47:07.728176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.557 [2024-07-24 22:47:07.728229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.557 [2024-07-24 22:47:07.728243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.557 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:09.557 #30 NEW cov: 12309 ft: 15462 corp: 19/1066b lim: 90 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 ChangeByte- 00:08:09.817 [2024-07-24 22:47:07.768050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.817 [2024-07-24 22:47:07.768079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.768118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.817 [2024-07-24 22:47:07.768131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.817 #32 NEW cov: 12309 ft: 15516 corp: 20/1115b lim: 90 exec/s: 0 rss: 72Mb L: 49/85 MS: 2 ChangeByte-CrossOver- 00:08:09.817 [2024-07-24 22:47:07.808470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.817 [2024-07-24 22:47:07.808496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.808552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.817 [2024-07-24 22:47:07.808565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.808617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.817 [2024-07-24 22:47:07.808647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.808697] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.817 [2024-07-24 22:47:07.808710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.817 #33 NEW cov: 12309 ft: 15532 corp: 21/1195b lim: 90 exec/s: 0 rss: 72Mb L: 80/85 MS: 1 ChangeByte- 00:08:09.817 [2024-07-24 22:47:07.858460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.817 [2024-07-24 22:47:07.858485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.858529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.817 [2024-07-24 22:47:07.858542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.858599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.817 [2024-07-24 22:47:07.858613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.817 #34 NEW cov: 12309 ft: 15542 corp: 22/1252b lim: 90 exec/s: 34 rss: 72Mb L: 57/85 MS: 1 ChangeBit- 00:08:09.817 [2024-07-24 22:47:07.908735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.817 [2024-07-24 22:47:07.908761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.908818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.817 [2024-07-24 22:47:07.908831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.908882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.817 [2024-07-24 22:47:07.908895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.908948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.817 [2024-07-24 22:47:07.908962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.817 #35 NEW cov: 12309 ft: 15553 corp: 23/1337b lim: 90 exec/s: 35 rss: 72Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:08:09.817 [2024-07-24 22:47:07.959053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.817 [2024-07-24 22:47:07.959082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.959132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.817 [2024-07-24 22:47:07.959143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.959196] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.817 [2024-07-24 22:47:07.959209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.959259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:09.817 [2024-07-24 22:47:07.959271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:07.959325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:09.817 [2024-07-24 22:47:07.959339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:09.817 #36 NEW cov: 12309 ft: 15597 corp: 24/1427b lim: 90 exec/s: 36 rss: 72Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:08:09.817 [2024-07-24 22:47:08.008812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:09.817 [2024-07-24 22:47:08.008838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:08.008886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:09.817 [2024-07-24 22:47:08.008899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.817 [2024-07-24 22:47:08.008952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:09.817 [2024-07-24 22:47:08.008969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.076 #37 NEW cov: 12309 ft: 15615 corp: 25/1488b lim: 90 exec/s: 37 rss: 72Mb L: 61/90 MS: 1 ChangeByte- 00:08:10.076 [2024-07-24 22:47:08.049125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.076 [2024-07-24 22:47:08.049151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.076 [2024-07-24 22:47:08.049205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.076 [2024-07-24 22:47:08.049219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.076 [2024-07-24 22:47:08.049270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.076 [2024-07-24 22:47:08.049283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.076 [2024-07-24 22:47:08.049337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.076 [2024-07-24 22:47:08.049349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.077 #40 NEW cov: 12309 ft: 15623 corp: 26/1560b lim: 90 exec/s: 40 rss: 72Mb L: 72/90 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 
00:08:10.077 [2024-07-24 22:47:08.089285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.077 [2024-07-24 22:47:08.089311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.077 [2024-07-24 22:47:08.089377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.077 [2024-07-24 22:47:08.089390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.077 [2024-07-24 22:47:08.089445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.077 [2024-07-24 22:47:08.089458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.077 [2024-07-24 22:47:08.089512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.077 [2024-07-24 22:47:08.089525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.077 #41 NEW cov: 12309 ft: 15644 corp: 27/1645b lim: 90 exec/s: 41 rss: 73Mb L: 85/90 MS: 1 ChangeByte- 00:08:10.077 [2024-07-24 22:47:08.138937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.077 [2024-07-24 22:47:08.138962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.077 #45 NEW cov: 12309 ft: 15652 corp: 28/1665b lim: 90 exec/s: 45 rss: 73Mb L: 20/90 MS: 4 CopyPart-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:08:10.077 [2024-07-24 22:47:08.179376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.077 [2024-07-24 22:47:08.179401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.077 [2024-07-24 22:47:08.179454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.077 [2024-07-24 22:47:08.179467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.077 [2024-07-24 22:47:08.179520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.077 [2024-07-24 22:47:08.179535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.077 #46 NEW cov: 12309 ft: 15657 corp: 29/1726b lim: 90 exec/s: 46 rss: 73Mb L: 61/90 MS: 1 CopyPart- 00:08:10.077 [2024-07-24 22:47:08.229497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.077 [2024-07-24 22:47:08.229523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.077 [2024-07-24 22:47:08.229577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.077 [2024-07-24 22:47:08.229589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.077 [2024-07-24 22:47:08.229644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.077 [2024-07-24 22:47:08.229657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.077 #47 NEW cov: 12309 ft: 15674 corp: 30/1795b lim: 90 exec/s: 47 rss: 73Mb L: 69/90 MS: 1 CMP- DE: "\005E\"X\321\177\000\000"- 00:08:10.077 [2024-07-24 22:47:08.279501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.077 [2024-07-24 22:47:08.279526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.077 [2024-07-24 22:47:08.279567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.077 [2024-07-24 22:47:08.279581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.336 #48 NEW cov: 12309 ft: 15710 corp: 31/1832b lim: 90 exec/s: 48 rss: 73Mb L: 37/90 MS: 1 ShuffleBytes- 00:08:10.336 [2024-07-24 22:47:08.329756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.336 [2024-07-24 22:47:08.329781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.329829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.336 [2024-07-24 22:47:08.329842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.329895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.336 [2024-07-24 22:47:08.329925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.336 #49 NEW cov: 12309 ft: 15717 corp: 32/1901b lim: 90 exec/s: 49 rss: 73Mb L: 69/90 MS: 1 CMP- DE: "\000\027c\244\016\334B4"- 00:08:10.336 [2024-07-24 22:47:08.369914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.336 [2024-07-24 22:47:08.369940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.369983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.336 [2024-07-24 22:47:08.369996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.370050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.336 [2024-07-24 22:47:08.370064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.336 #50 NEW cov: 12309 ft: 15725 corp: 33/1962b lim: 90 exec/s: 50 rss: 73Mb L: 61/90 MS: 1 CMP- DE: "\364\377\377\377"- 00:08:10.336 [2024-07-24 22:47:08.410186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.336 [2024-07-24 22:47:08.410212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.410266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.336 [2024-07-24 22:47:08.410279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.410332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.336 [2024-07-24 22:47:08.410346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.410400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.336 [2024-07-24 22:47:08.410413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.336 #51 NEW cov: 12309 ft: 15729 corp: 34/2034b lim: 90 exec/s: 51 rss: 73Mb L: 72/90 MS: 1 CopyPart- 00:08:10.336 [2024-07-24 22:47:08.450120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.336 [2024-07-24 22:47:08.450145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.450191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.336 [2024-07-24 22:47:08.450204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.450258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.336 [2024-07-24 22:47:08.450272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.336 #52 NEW cov: 12309 ft: 15734 corp: 35/2095b lim: 90 exec/s: 52 rss: 73Mb L: 61/90 MS: 1 ChangeBinInt- 00:08:10.336 [2024-07-24 22:47:08.500457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.336 [2024-07-24 22:47:08.500482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.500534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.336 [2024-07-24 22:47:08.500548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.500600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.336 [2024-07-24 22:47:08.500614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.336 [2024-07-24 22:47:08.500665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.337 [2024-07-24 22:47:08.500678] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.337 #53 NEW cov: 12309 ft: 15773 corp: 36/2175b lim: 90 exec/s: 53 rss: 73Mb L: 80/90 MS: 1 ChangeBit- 00:08:10.597 [2024-07-24 22:47:08.550767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.597 [2024-07-24 22:47:08.550792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.550845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.597 [2024-07-24 22:47:08.550856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.550909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.597 [2024-07-24 22:47:08.550926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.550978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.597 [2024-07-24 22:47:08.550992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.551045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:10.597 [2024-07-24 22:47:08.551058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:10.597 #54 NEW cov: 12309 ft: 15787 corp: 37/2265b lim: 90 exec/s: 54 rss: 73Mb L: 90/90 MS: 1 ChangeByte- 00:08:10.597 [2024-07-24 22:47:08.600711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.597 [2024-07-24 22:47:08.600737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.600792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.597 [2024-07-24 22:47:08.600806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.600861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.597 [2024-07-24 22:47:08.600874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.600928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.597 [2024-07-24 22:47:08.600940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.597 #55 NEW cov: 12309 ft: 15797 corp: 38/2342b lim: 90 exec/s: 55 rss: 74Mb L: 77/90 MS: 1 PersAutoDict- DE: "\000\027c\244\016\334B4"- 00:08:10.597 [2024-07-24 22:47:08.650420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.597 [2024-07-24 
22:47:08.650446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.597 #56 NEW cov: 12309 ft: 15862 corp: 39/2367b lim: 90 exec/s: 56 rss: 74Mb L: 25/90 MS: 1 EraseBytes- 00:08:10.597 [2024-07-24 22:47:08.690978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.597 [2024-07-24 22:47:08.691003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.691061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.597 [2024-07-24 22:47:08.691078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.691131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.597 [2024-07-24 22:47:08.691145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.691199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.597 [2024-07-24 22:47:08.691211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.597 #57 NEW cov: 12309 ft: 15902 corp: 40/2444b lim: 90 exec/s: 57 rss: 74Mb L: 77/90 MS: 1 ChangeBinInt- 00:08:10.597 [2024-07-24 22:47:08.740930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.597 [2024-07-24 22:47:08.740956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.740995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.597 [2024-07-24 22:47:08.741008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.741064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.597 [2024-07-24 22:47:08.741082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.597 #58 NEW cov: 12309 ft: 15929 corp: 41/2514b lim: 90 exec/s: 58 rss: 74Mb L: 70/90 MS: 1 CrossOver- 00:08:10.597 [2024-07-24 22:47:08.781063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.597 [2024-07-24 22:47:08.781093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.781146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.597 [2024-07-24 22:47:08.781159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.597 [2024-07-24 22:47:08.781213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.597 [2024-07-24 
22:47:08.781227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.856 #59 NEW cov: 12309 ft: 15936 corp: 42/2571b lim: 90 exec/s: 59 rss: 74Mb L: 57/90 MS: 1 ChangeBinInt- 00:08:10.856 [2024-07-24 22:47:08.820877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.856 [2024-07-24 22:47:08.820903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.856 #60 NEW cov: 12309 ft: 16020 corp: 43/2595b lim: 90 exec/s: 30 rss: 74Mb L: 24/90 MS: 1 EraseBytes- 00:08:10.856 #60 DONE cov: 12309 ft: 16020 corp: 43/2595b lim: 90 exec/s: 30 rss: 74Mb 00:08:10.856 ###### Recommended dictionary. ###### 00:08:10.856 "\005E\"X\321\177\000\000" # Uses: 0 00:08:10.856 "\000\027c\244\016\334B4" # Uses: 1 00:08:10.856 "\364\377\377\377" # Uses: 0 00:08:10.856 ###### End of recommended dictionary. ###### 00:08:10.856 Done 60 runs in 2 second(s) 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:10.856 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:10.857 22:47:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 
1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:10.857 [2024-07-24 22:47:09.015556] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:10.857 [2024-07-24 22:47:09.015629] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487402 ] 00:08:10.857 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.115 [2024-07-24 22:47:09.266825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.374 [2024-07-24 22:47:09.347735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.374 [2024-07-24 22:47:09.406433] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.375 [2024-07-24 22:47:09.422674] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:11.375 INFO: Running with entropic power schedule (0xFF, 100). 00:08:11.375 INFO: Seed: 4089781867 00:08:11.375 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:08:11.375 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:08:11.375 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:11.375 INFO: A corpus is not provided, starting from an empty corpus 00:08:11.375 #2 INITED exec/s: 0 rss: 64Mb 00:08:11.375 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:11.375 This may also happen if the target rejected all inputs we tried so far 00:08:11.375 [2024-07-24 22:47:09.478437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.375 [2024-07-24 22:47:09.478463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.375 [2024-07-24 22:47:09.478520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.375 [2024-07-24 22:47:09.478533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.375 [2024-07-24 22:47:09.478591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.375 [2024-07-24 22:47:09.478605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.375 [2024-07-24 22:47:09.478663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.375 [2024-07-24 22:47:09.478678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.634 NEW_FUNC[1/702]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:11.634 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:11.634 #4 NEW cov: 12057 ft: 12044 corp: 2/43b lim: 50 exec/s: 0 rss: 70Mb L: 42/42 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:11.634 [2024-07-24 22:47:09.660622] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.634 [2024-07-24 22:47:09.660673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.634 [2024-07-24 22:47:09.660750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.634 [2024-07-24 22:47:09.660774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.634 [2024-07-24 22:47:09.660870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.634 [2024-07-24 22:47:09.660888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.634 [2024-07-24 22:47:09.660993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.634 [2024-07-24 22:47:09.661010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.634 #5 NEW cov: 12170 ft: 12480 corp: 3/85b lim: 50 exec/s: 0 rss: 70Mb L: 42/42 MS: 1 ShuffleBytes- 00:08:11.634 [2024-07-24 22:47:09.730676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.634 [2024-07-24 22:47:09.730700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.634 [2024-07-24 22:47:09.730785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.634 [2024-07-24 22:47:09.730798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.634 [2024-07-24 22:47:09.730886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.634 [2024-07-24 22:47:09.730899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.634 [2024-07-24 22:47:09.730985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.634 [2024-07-24 22:47:09.730999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.634 #11 NEW cov: 12176 ft: 12773 corp: 4/127b lim: 50 exec/s: 0 rss: 70Mb L: 42/42 MS: 1 CrossOver- 00:08:11.634 [2024-07-24 22:47:09.790718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.634 [2024-07-24 22:47:09.790742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.634 [2024-07-24 22:47:09.790811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.634 [2024-07-24 22:47:09.790827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.634 [2024-07-24 22:47:09.790901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.634 [2024-07-24 
22:47:09.790917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.634 #16 NEW cov: 12261 ft: 13263 corp: 5/166b lim: 50 exec/s: 0 rss: 70Mb L: 39/42 MS: 5 InsertByte-ChangeByte-CopyPart-CrossOver-InsertRepeatedBytes- 00:08:11.893 [2024-07-24 22:47:09.841275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.893 [2024-07-24 22:47:09.841302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.841387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.893 [2024-07-24 22:47:09.841402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.841489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.893 [2024-07-24 22:47:09.841505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.841593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.893 [2024-07-24 22:47:09.841607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.893 #17 NEW cov: 12261 ft: 13414 corp: 6/209b lim: 50 exec/s: 0 rss: 70Mb L: 43/43 MS: 1 InsertByte- 00:08:11.893 [2024-07-24 22:47:09.891516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.893 [2024-07-24 22:47:09.891540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.891623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.893 [2024-07-24 22:47:09.891635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.891727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.893 [2024-07-24 22:47:09.891737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.891825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.893 [2024-07-24 22:47:09.891839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.893 #18 NEW cov: 12261 ft: 13535 corp: 7/251b lim: 50 exec/s: 0 rss: 70Mb L: 42/43 MS: 1 ChangeBinInt- 00:08:11.893 [2024-07-24 22:47:09.941848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.893 [2024-07-24 22:47:09.941873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.941970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:1 nsid:0 00:08:11.893 [2024-07-24 22:47:09.941988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.942091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.893 [2024-07-24 22:47:09.942102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:09.942188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.893 [2024-07-24 22:47:09.942207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.893 #19 NEW cov: 12261 ft: 13598 corp: 8/293b lim: 50 exec/s: 0 rss: 70Mb L: 42/43 MS: 1 ChangeByte- 00:08:11.893 [2024-07-24 22:47:10.012678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.893 [2024-07-24 22:47:10.012704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:10.012802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.893 [2024-07-24 22:47:10.012818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:10.012904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.893 [2024-07-24 22:47:10.012921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:10.013015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.893 [2024-07-24 22:47:10.013032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:10.013134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:11.893 [2024-07-24 22:47:10.013153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:11.893 #20 NEW cov: 12261 ft: 13653 corp: 9/343b lim: 50 exec/s: 0 rss: 71Mb L: 50/50 MS: 1 CMP- DE: "@\000\000\000\000\000\000\000"- 00:08:11.893 [2024-07-24 22:47:10.083005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.893 [2024-07-24 22:47:10.083037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:10.083116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.893 [2024-07-24 22:47:10.083133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:10.083216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.893 [2024-07-24 22:47:10.083234] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.893 [2024-07-24 22:47:10.083325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.893 [2024-07-24 22:47:10.083349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.152 #21 NEW cov: 12261 ft: 13676 corp: 10/391b lim: 50 exec/s: 0 rss: 71Mb L: 48/50 MS: 1 CopyPart- 00:08:12.152 [2024-07-24 22:47:10.133113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.152 [2024-07-24 22:47:10.133141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.133213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.152 [2024-07-24 22:47:10.133230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.133313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.152 [2024-07-24 22:47:10.133329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.152 #22 NEW cov: 12261 ft: 13734 corp: 11/430b lim: 50 exec/s: 0 rss: 71Mb L: 39/50 MS: 1 PersAutoDict- DE: "@\000\000\000\000\000\000\000"- 00:08:12.152 [2024-07-24 22:47:10.202935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.152 [2024-07-24 22:47:10.202960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.152 #24 NEW cov: 12261 ft: 14588 corp: 12/442b lim: 50 exec/s: 0 rss: 71Mb L: 12/50 MS: 2 ChangeBit-CrossOver- 00:08:12.152 [2024-07-24 22:47:10.264800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.152 [2024-07-24 22:47:10.264828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.264916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.152 [2024-07-24 22:47:10.264934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.265017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.152 [2024-07-24 22:47:10.265031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.265122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.152 [2024-07-24 22:47:10.265138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.265223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:12.152 
[2024-07-24 22:47:10.265240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:12.152 #25 NEW cov: 12261 ft: 14629 corp: 13/492b lim: 50 exec/s: 0 rss: 71Mb L: 50/50 MS: 1 CMP- DE: "\377\377\377\004"- 00:08:12.152 [2024-07-24 22:47:10.334756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.152 [2024-07-24 22:47:10.334782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.334877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.152 [2024-07-24 22:47:10.334894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.334981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.152 [2024-07-24 22:47:10.334998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.152 [2024-07-24 22:47:10.335085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.152 [2024-07-24 22:47:10.335103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.411 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:12.411 #26 NEW cov: 12284 ft: 14679 corp: 14/535b lim: 50 exec/s: 0 rss: 71Mb L: 43/50 MS: 1 CrossOver- 00:08:12.411 [2024-07-24 22:47:10.403968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.411 [2024-07-24 22:47:10.403996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.412 #30 NEW cov: 12284 ft: 14731 corp: 15/550b lim: 50 exec/s: 0 rss: 71Mb L: 15/50 MS: 4 InsertByte-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:08:12.412 [2024-07-24 22:47:10.455437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.412 [2024-07-24 22:47:10.455463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.455545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.412 [2024-07-24 22:47:10.455562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.455647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.412 [2024-07-24 22:47:10.455659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.455743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.412 [2024-07-24 22:47:10.455757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:08:12.412 #31 NEW cov: 12284 ft: 14753 corp: 16/593b lim: 50 exec/s: 31 rss: 71Mb L: 43/50 MS: 1 InsertRepeatedBytes- 00:08:12.412 [2024-07-24 22:47:10.505886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.412 [2024-07-24 22:47:10.505911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.505998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.412 [2024-07-24 22:47:10.506013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.506106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.412 [2024-07-24 22:47:10.506117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.506202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.412 [2024-07-24 22:47:10.506216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.412 #32 NEW cov: 12284 ft: 14794 corp: 17/636b lim: 50 exec/s: 32 rss: 71Mb L: 43/50 MS: 1 ChangeBinInt- 00:08:12.412 [2024-07-24 22:47:10.566101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.412 [2024-07-24 22:47:10.566129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.566216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.412 [2024-07-24 22:47:10.566231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.566319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.412 [2024-07-24 22:47:10.566330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.566418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.412 [2024-07-24 22:47:10.566432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.412 #33 NEW cov: 12284 ft: 14804 corp: 18/678b lim: 50 exec/s: 33 rss: 71Mb L: 42/50 MS: 1 ChangeBinInt- 00:08:12.412 [2024-07-24 22:47:10.616278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.412 [2024-07-24 22:47:10.616300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.616400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.412 [2024-07-24 22:47:10.616418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.616507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.412 [2024-07-24 22:47:10.616517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.412 [2024-07-24 22:47:10.616613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.412 [2024-07-24 22:47:10.616629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.671 #34 NEW cov: 12284 ft: 14830 corp: 19/726b lim: 50 exec/s: 34 rss: 72Mb L: 48/50 MS: 1 CrossOver- 00:08:12.671 [2024-07-24 22:47:10.675855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.671 [2024-07-24 22:47:10.675881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.675942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.671 [2024-07-24 22:47:10.675961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.671 #35 NEW cov: 12284 ft: 15122 corp: 20/755b lim: 50 exec/s: 35 rss: 72Mb L: 29/50 MS: 1 EraseBytes- 00:08:12.671 [2024-07-24 22:47:10.737125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.671 [2024-07-24 22:47:10.737150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.737261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.671 [2024-07-24 22:47:10.737275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.737363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.671 [2024-07-24 22:47:10.737373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.737467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.671 [2024-07-24 22:47:10.737480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.737571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:12.671 [2024-07-24 22:47:10.737585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:12.671 #36 NEW cov: 12284 ft: 15141 corp: 21/805b lim: 50 exec/s: 36 rss: 72Mb L: 50/50 MS: 1 PersAutoDict- DE: "@\000\000\000\000\000\000\000"- 00:08:12.671 [2024-07-24 22:47:10.787105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.671 [2024-07-24 22:47:10.787130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.787218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.671 [2024-07-24 22:47:10.787235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.787328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.671 [2024-07-24 22:47:10.787339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.787425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.671 [2024-07-24 22:47:10.787440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.671 #37 NEW cov: 12284 ft: 15152 corp: 22/847b lim: 50 exec/s: 37 rss: 72Mb L: 42/50 MS: 1 CrossOver- 00:08:12.671 [2024-07-24 22:47:10.837304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.671 [2024-07-24 22:47:10.837328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.837420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.671 [2024-07-24 22:47:10.837437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.837527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.671 [2024-07-24 22:47:10.837541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.671 [2024-07-24 22:47:10.837630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.671 [2024-07-24 22:47:10.837651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.671 #38 NEW cov: 12284 ft: 15176 corp: 23/890b lim: 50 exec/s: 38 rss: 72Mb L: 43/50 MS: 1 PersAutoDict- DE: "\377\377\377\004"- 00:08:12.930 [2024-07-24 22:47:10.897749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.930 [2024-07-24 22:47:10.897776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.930 [2024-07-24 22:47:10.897859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.931 [2024-07-24 22:47:10.897877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:10.897967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.931 [2024-07-24 22:47:10.897978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 
22:47:10.898064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.931 [2024-07-24 22:47:10.898078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.931 #39 NEW cov: 12284 ft: 15202 corp: 24/931b lim: 50 exec/s: 39 rss: 72Mb L: 41/50 MS: 1 EraseBytes- 00:08:12.931 [2024-07-24 22:47:10.948010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.931 [2024-07-24 22:47:10.948035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:10.948182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.931 [2024-07-24 22:47:10.948198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:10.948285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.931 [2024-07-24 22:47:10.948300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:10.948380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.931 [2024-07-24 22:47:10.948397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.931 #40 NEW cov: 12284 ft: 15215 corp: 25/974b lim: 50 exec/s: 40 rss: 72Mb L: 43/50 MS: 1 CopyPart- 00:08:12.931 [2024-07-24 22:47:10.998175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.931 [2024-07-24 22:47:10.998198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:10.998310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.931 [2024-07-24 22:47:10.998327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:10.998411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.931 [2024-07-24 22:47:10.998422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:10.998502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.931 [2024-07-24 22:47:10.998516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.931 #41 NEW cov: 12284 ft: 15237 corp: 26/1017b lim: 50 exec/s: 41 rss: 72Mb L: 43/50 MS: 1 CopyPart- 00:08:12.931 [2024-07-24 22:47:11.058722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.931 [2024-07-24 22:47:11.058748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 
22:47:11.058848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.931 [2024-07-24 22:47:11.058864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:11.058941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.931 [2024-07-24 22:47:11.058955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:11.059049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.931 [2024-07-24 22:47:11.059065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.931 #42 NEW cov: 12284 ft: 15302 corp: 27/1059b lim: 50 exec/s: 42 rss: 72Mb L: 42/50 MS: 1 ChangeBinInt- 00:08:12.931 [2024-07-24 22:47:11.109142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.931 [2024-07-24 22:47:11.109172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:11.109288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:12.931 [2024-07-24 22:47:11.109303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:11.109387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:12.931 [2024-07-24 22:47:11.109398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.931 [2024-07-24 22:47:11.109488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:12.931 [2024-07-24 22:47:11.109502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.931 #43 NEW cov: 12284 ft: 15313 corp: 28/1102b lim: 50 exec/s: 43 rss: 72Mb L: 43/50 MS: 1 CMP- DE: "\377\377\377\003"- 00:08:13.190 [2024-07-24 22:47:11.159700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.190 [2024-07-24 22:47:11.159725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.159809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.190 [2024-07-24 22:47:11.159823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.159916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.190 [2024-07-24 22:47:11.159929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.160018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:3 nsid:0 00:08:13.190 [2024-07-24 22:47:11.160034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.190 #44 NEW cov: 12284 ft: 15317 corp: 29/1149b lim: 50 exec/s: 44 rss: 72Mb L: 47/50 MS: 1 CopyPart- 00:08:13.190 [2024-07-24 22:47:11.210351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.190 [2024-07-24 22:47:11.210376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.210446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.190 [2024-07-24 22:47:11.210459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.210549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.190 [2024-07-24 22:47:11.210559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.210644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.190 [2024-07-24 22:47:11.210661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.190 #45 NEW cov: 12284 ft: 15352 corp: 30/1198b lim: 50 exec/s: 45 rss: 72Mb L: 49/50 MS: 1 CrossOver- 00:08:13.190 [2024-07-24 22:47:11.270565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.190 [2024-07-24 22:47:11.270591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.270705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.190 [2024-07-24 22:47:11.270722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.270806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.190 [2024-07-24 22:47:11.270816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.270914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.190 [2024-07-24 22:47:11.270932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.190 #46 NEW cov: 12284 ft: 15369 corp: 31/1241b lim: 50 exec/s: 46 rss: 72Mb L: 43/50 MS: 1 CMP- DE: "\000\016"- 00:08:13.190 [2024-07-24 22:47:11.331140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.190 [2024-07-24 22:47:11.331166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.331255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:1 nsid:0 00:08:13.190 [2024-07-24 22:47:11.331273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.190 [2024-07-24 22:47:11.331362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.191 [2024-07-24 22:47:11.331377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.191 [2024-07-24 22:47:11.331468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.191 [2024-07-24 22:47:11.331483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.191 #47 NEW cov: 12284 ft: 15382 corp: 32/1289b lim: 50 exec/s: 47 rss: 72Mb L: 48/50 MS: 1 InsertRepeatedBytes- 00:08:13.191 [2024-07-24 22:47:11.381635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.191 [2024-07-24 22:47:11.381658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.191 [2024-07-24 22:47:11.381750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.191 [2024-07-24 22:47:11.381768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.191 [2024-07-24 22:47:11.381866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.191 [2024-07-24 22:47:11.381876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.191 [2024-07-24 22:47:11.381968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:13.191 [2024-07-24 22:47:11.381981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.191 [2024-07-24 22:47:11.382068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:13.191 [2024-07-24 22:47:11.382089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:13.449 #48 NEW cov: 12284 ft: 15422 corp: 33/1339b lim: 50 exec/s: 48 rss: 72Mb L: 50/50 MS: 1 PersAutoDict- DE: "@\000\000\000\000\000\000\000"- 00:08:13.449 [2024-07-24 22:47:11.440453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.450 [2024-07-24 22:47:11.440481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.450 #49 NEW cov: 12284 ft: 15506 corp: 34/1350b lim: 50 exec/s: 24 rss: 72Mb L: 11/50 MS: 1 EraseBytes- 00:08:13.450 #49 DONE cov: 12284 ft: 15506 corp: 34/1350b lim: 50 exec/s: 24 rss: 72Mb 00:08:13.450 ###### Recommended dictionary. ###### 00:08:13.450 "@\000\000\000\000\000\000\000" # Uses: 3 00:08:13.450 "\377\377\377\004" # Uses: 1 00:08:13.450 "\377\377\377\003" # Uses: 0 00:08:13.450 "\000\016" # Uses: 0 00:08:13.450 ###### End of recommended dictionary. 
###### 00:08:13.450 Done 49 runs in 2 second(s) 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:13.450 22:47:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:13.450 [2024-07-24 22:47:11.637581] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:08:13.450 [2024-07-24 22:47:11.637638] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487841 ] 00:08:13.708 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.708 [2024-07-24 22:47:11.888227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.967 [2024-07-24 22:47:11.971721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.967 [2024-07-24 22:47:12.030129] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.967 [2024-07-24 22:47:12.046374] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:13.967 INFO: Running with entropic power schedule (0xFF, 100). 00:08:13.967 INFO: Seed: 2416835174 00:08:13.967 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:08:13.967 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:08:13.967 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:13.967 INFO: A corpus is not provided, starting from an empty corpus 00:08:13.967 #2 INITED exec/s: 0 rss: 65Mb 00:08:13.967 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:13.967 This may also happen if the target rejected all inputs we tried so far 00:08:13.967 [2024-07-24 22:47:12.095709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.967 [2024-07-24 22:47:12.095743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.967 [2024-07-24 22:47:12.095807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.967 [2024-07-24 22:47:12.095824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.226 NEW_FUNC[1/702]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:14.226 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:14.226 #3 NEW cov: 12083 ft: 12077 corp: 2/37b lim: 85 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:08:14.226 [2024-07-24 22:47:12.246136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.226 [2024-07-24 22:47:12.246189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.226 [2024-07-24 22:47:12.246264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.226 [2024-07-24 22:47:12.246287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.226 #14 NEW cov: 12196 ft: 12723 corp: 3/73b lim: 85 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 ShuffleBytes- 00:08:14.226 [2024-07-24 22:47:12.305975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.226 [2024-07-24 
22:47:12.306001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.226 [2024-07-24 22:47:12.306039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.226 [2024-07-24 22:47:12.306052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.226 #15 NEW cov: 12202 ft: 12844 corp: 4/107b lim: 85 exec/s: 0 rss: 71Mb L: 34/36 MS: 1 EraseBytes- 00:08:14.226 [2024-07-24 22:47:12.356102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.226 [2024-07-24 22:47:12.356129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.226 [2024-07-24 22:47:12.356166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.226 [2024-07-24 22:47:12.356179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.226 #16 NEW cov: 12287 ft: 13094 corp: 5/143b lim: 85 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 CopyPart- 00:08:14.226 [2024-07-24 22:47:12.396399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.226 [2024-07-24 22:47:12.396425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.226 [2024-07-24 22:47:12.396472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.226 [2024-07-24 22:47:12.396485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.226 [2024-07-24 22:47:12.396537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.226 [2024-07-24 22:47:12.396550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.226 #20 NEW cov: 12287 ft: 13508 corp: 6/200b lim: 85 exec/s: 0 rss: 72Mb L: 57/57 MS: 4 InsertByte-ChangeBinInt-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:14.485 [2024-07-24 22:47:12.436498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.485 [2024-07-24 22:47:12.436522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.436570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.485 [2024-07-24 22:47:12.436581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.436630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.485 [2024-07-24 22:47:12.436643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.485 #21 NEW cov: 12287 ft: 13605 corp: 7/257b lim: 85 exec/s: 0 rss: 72Mb L: 57/57 MS: 1 ChangeBit- 00:08:14.485 
[2024-07-24 22:47:12.486352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.485 [2024-07-24 22:47:12.486377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.485 #22 NEW cov: 12287 ft: 14564 corp: 8/286b lim: 85 exec/s: 0 rss: 72Mb L: 29/57 MS: 1 EraseBytes- 00:08:14.485 [2024-07-24 22:47:12.526775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.485 [2024-07-24 22:47:12.526799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.526846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.485 [2024-07-24 22:47:12.526859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.526909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.485 [2024-07-24 22:47:12.526937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.485 #23 NEW cov: 12287 ft: 14596 corp: 9/343b lim: 85 exec/s: 0 rss: 72Mb L: 57/57 MS: 1 ChangeBinInt- 00:08:14.485 [2024-07-24 22:47:12.566907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.485 [2024-07-24 22:47:12.566935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.566971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.485 [2024-07-24 22:47:12.566984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.567034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.485 [2024-07-24 22:47:12.567046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.485 #24 NEW cov: 12287 ft: 14623 corp: 10/400b lim: 85 exec/s: 0 rss: 72Mb L: 57/57 MS: 1 CMP- DE: "\000\000\002\000"- 00:08:14.485 [2024-07-24 22:47:12.616916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.485 [2024-07-24 22:47:12.616942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.616978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.485 [2024-07-24 22:47:12.616991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.485 #25 NEW cov: 12287 ft: 14655 corp: 11/436b lim: 85 exec/s: 0 rss: 72Mb L: 36/57 MS: 1 ChangeBinInt- 00:08:14.485 [2024-07-24 22:47:12.657102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.485 [2024-07-24 22:47:12.657128] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.657179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.485 [2024-07-24 22:47:12.657191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.485 [2024-07-24 22:47:12.657242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.485 [2024-07-24 22:47:12.657255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.485 #26 NEW cov: 12287 ft: 14733 corp: 12/494b lim: 85 exec/s: 0 rss: 72Mb L: 58/58 MS: 1 InsertByte- 00:08:14.744 [2024-07-24 22:47:12.697090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.744 [2024-07-24 22:47:12.697133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.697175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.744 [2024-07-24 22:47:12.697188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.744 #27 NEW cov: 12287 ft: 14755 corp: 13/530b lim: 85 exec/s: 0 rss: 72Mb L: 36/58 MS: 1 CopyPart- 00:08:14.744 [2024-07-24 22:47:12.747060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.744 [2024-07-24 22:47:12.747090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.744 #29 NEW cov: 12287 ft: 14839 corp: 14/559b lim: 85 exec/s: 0 rss: 72Mb L: 29/58 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:14.744 [2024-07-24 22:47:12.787608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.744 [2024-07-24 22:47:12.787634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.787683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.744 [2024-07-24 22:47:12.787698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.787747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.744 [2024-07-24 22:47:12.787759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.787809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.744 [2024-07-24 22:47:12.787823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.744 #30 NEW cov: 12287 ft: 15186 corp: 15/642b lim: 85 exec/s: 0 rss: 72Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:08:14.744 [2024-07-24 22:47:12.827592] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.744 [2024-07-24 22:47:12.827618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.827660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.744 [2024-07-24 22:47:12.827672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.827724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.744 [2024-07-24 22:47:12.827737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.744 #31 NEW cov: 12287 ft: 15206 corp: 16/699b lim: 85 exec/s: 0 rss: 72Mb L: 57/83 MS: 1 ShuffleBytes- 00:08:14.744 [2024-07-24 22:47:12.867908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.744 [2024-07-24 22:47:12.867933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.867980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.744 [2024-07-24 22:47:12.867993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.868044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.744 [2024-07-24 22:47:12.868056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.868122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:14.744 [2024-07-24 22:47:12.868137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.744 #32 NEW cov: 12287 ft: 15216 corp: 17/782b lim: 85 exec/s: 0 rss: 72Mb L: 83/83 MS: 1 CopyPart- 00:08:14.744 [2024-07-24 22:47:12.917871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:14.744 [2024-07-24 22:47:12.917897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.917942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:14.744 [2024-07-24 22:47:12.917955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.744 [2024-07-24 22:47:12.918006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:14.744 [2024-07-24 22:47:12.918020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.744 #33 NEW cov: 12287 ft: 15238 corp: 18/839b lim: 85 exec/s: 0 rss: 72Mb L: 57/83 MS: 1 CopyPart- 00:08:15.003 [2024-07-24 22:47:12.957963] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.003 [2024-07-24 22:47:12.957989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:12.958033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.003 [2024-07-24 22:47:12.958047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:12.958102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.003 [2024-07-24 22:47:12.958116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.003 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:15.003 #34 NEW cov: 12310 ft: 15253 corp: 19/897b lim: 85 exec/s: 0 rss: 72Mb L: 58/83 MS: 1 InsertByte- 00:08:15.003 [2024-07-24 22:47:12.998301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.003 [2024-07-24 22:47:12.998326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:12.998378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.003 [2024-07-24 22:47:12.998390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:12.998441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.003 [2024-07-24 22:47:12.998455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:12.998506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.003 [2024-07-24 22:47:12.998517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.003 #35 NEW cov: 12310 ft: 15282 corp: 20/966b lim: 85 exec/s: 0 rss: 72Mb L: 69/83 MS: 1 InsertRepeatedBytes- 00:08:15.003 [2024-07-24 22:47:13.048268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.003 [2024-07-24 22:47:13.048293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:13.048336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.003 [2024-07-24 22:47:13.048348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:13.048399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.003 [2024-07-24 22:47:13.048412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.003 #36 NEW cov: 12310 
ft: 15285 corp: 21/1023b lim: 85 exec/s: 0 rss: 72Mb L: 57/83 MS: 1 ShuffleBytes- 00:08:15.003 [2024-07-24 22:47:13.088346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.003 [2024-07-24 22:47:13.088372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:13.088418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.003 [2024-07-24 22:47:13.088433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:13.088487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.003 [2024-07-24 22:47:13.088502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.003 #37 NEW cov: 12310 ft: 15324 corp: 22/1080b lim: 85 exec/s: 37 rss: 72Mb L: 57/83 MS: 1 CMP- DE: "\262) 0K\177\000\000"- 00:08:15.003 [2024-07-24 22:47:13.138191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.003 [2024-07-24 22:47:13.138217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.003 #38 NEW cov: 12310 ft: 15331 corp: 23/1109b lim: 85 exec/s: 38 rss: 72Mb L: 29/83 MS: 1 CrossOver- 00:08:15.003 [2024-07-24 22:47:13.188470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.003 [2024-07-24 22:47:13.188495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.003 [2024-07-24 22:47:13.188534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.003 [2024-07-24 22:47:13.188547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.261 #39 NEW cov: 12310 ft: 15399 corp: 24/1146b lim: 85 exec/s: 39 rss: 72Mb L: 37/83 MS: 1 InsertByte- 00:08:15.261 [2024-07-24 22:47:13.238766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.261 [2024-07-24 22:47:13.238790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.261 [2024-07-24 22:47:13.238834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.261 [2024-07-24 22:47:13.238847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.261 [2024-07-24 22:47:13.238898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.261 [2024-07-24 22:47:13.238910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.261 #40 NEW cov: 12310 ft: 15448 corp: 25/1211b lim: 85 exec/s: 40 rss: 72Mb L: 65/83 MS: 1 PersAutoDict- DE: "\262) 0K\177\000\000"- 00:08:15.261 [2024-07-24 22:47:13.278724] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.261 [2024-07-24 22:47:13.278749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.261 [2024-07-24 22:47:13.278789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.261 [2024-07-24 22:47:13.278802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.261 #41 NEW cov: 12310 ft: 15467 corp: 26/1248b lim: 85 exec/s: 41 rss: 73Mb L: 37/83 MS: 1 ChangeBinInt- 00:08:15.261 [2024-07-24 22:47:13.329026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.261 [2024-07-24 22:47:13.329050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.261 [2024-07-24 22:47:13.329101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.261 [2024-07-24 22:47:13.329114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.261 [2024-07-24 22:47:13.329167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.261 [2024-07-24 22:47:13.329180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.261 #42 NEW cov: 12310 ft: 15477 corp: 27/1306b lim: 85 exec/s: 42 rss: 73Mb L: 58/83 MS: 1 CrossOver- 00:08:15.262 [2024-07-24 22:47:13.378990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.262 [2024-07-24 22:47:13.379014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.262 [2024-07-24 22:47:13.379052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.262 [2024-07-24 22:47:13.379065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.262 #43 NEW cov: 12310 ft: 15495 corp: 28/1343b lim: 85 exec/s: 43 rss: 73Mb L: 37/83 MS: 1 ShuffleBytes- 00:08:15.262 [2024-07-24 22:47:13.419143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.262 [2024-07-24 22:47:13.419167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.262 [2024-07-24 22:47:13.419204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.262 [2024-07-24 22:47:13.419218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.262 #44 NEW cov: 12310 ft: 15504 corp: 29/1379b lim: 85 exec/s: 44 rss: 73Mb L: 36/83 MS: 1 ChangeByte- 00:08:15.262 [2024-07-24 22:47:13.459529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.262 [2024-07-24 22:47:13.459554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.262 [2024-07-24 22:47:13.459604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.262 [2024-07-24 22:47:13.459616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.262 [2024-07-24 22:47:13.459665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.262 [2024-07-24 22:47:13.459677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.262 [2024-07-24 22:47:13.459728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.262 [2024-07-24 22:47:13.459740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.520 #45 NEW cov: 12310 ft: 15507 corp: 30/1448b lim: 85 exec/s: 45 rss: 73Mb L: 69/83 MS: 1 CMP- DE: "\004\000"- 00:08:15.520 [2024-07-24 22:47:13.509693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.520 [2024-07-24 22:47:13.509718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.520 [2024-07-24 22:47:13.509767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.520 [2024-07-24 22:47:13.509779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.520 [2024-07-24 22:47:13.509831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.520 [2024-07-24 22:47:13.509843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.520 [2024-07-24 22:47:13.509896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.520 [2024-07-24 22:47:13.509908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.521 #46 NEW cov: 12310 ft: 15531 corp: 31/1517b lim: 85 exec/s: 46 rss: 73Mb L: 69/83 MS: 1 CopyPart- 00:08:15.521 [2024-07-24 22:47:13.549336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.521 [2024-07-24 22:47:13.549364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.521 #47 NEW cov: 12310 ft: 15591 corp: 32/1546b lim: 85 exec/s: 47 rss: 73Mb L: 29/83 MS: 1 ChangeBinInt- 00:08:15.521 [2024-07-24 22:47:13.599802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.521 [2024-07-24 22:47:13.599828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.521 [2024-07-24 22:47:13.599869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.521 [2024-07-24 22:47:13.599882] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.521 [2024-07-24 22:47:13.599933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.521 [2024-07-24 22:47:13.599945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.521 #48 NEW cov: 12310 ft: 15604 corp: 33/1603b lim: 85 exec/s: 48 rss: 73Mb L: 57/83 MS: 1 ChangeBinInt- 00:08:15.521 [2024-07-24 22:47:13.649942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.521 [2024-07-24 22:47:13.649968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.521 [2024-07-24 22:47:13.650012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.521 [2024-07-24 22:47:13.650025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.521 [2024-07-24 22:47:13.650077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.521 [2024-07-24 22:47:13.650090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.521 #49 NEW cov: 12310 ft: 15616 corp: 34/1661b lim: 85 exec/s: 49 rss: 73Mb L: 58/83 MS: 1 ChangeBit- 00:08:15.521 [2024-07-24 22:47:13.700087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.521 [2024-07-24 22:47:13.700111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.521 [2024-07-24 22:47:13.700161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.521 [2024-07-24 22:47:13.700173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.521 [2024-07-24 22:47:13.700224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.521 [2024-07-24 22:47:13.700237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.521 #50 NEW cov: 12310 ft: 15621 corp: 35/1718b lim: 85 exec/s: 50 rss: 73Mb L: 57/83 MS: 1 ChangeByte- 00:08:15.781 [2024-07-24 22:47:13.740176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.781 [2024-07-24 22:47:13.740200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.740252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.781 [2024-07-24 22:47:13.740263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.740310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.781 [2024-07-24 22:47:13.740323] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.781 #51 NEW cov: 12310 ft: 15639 corp: 36/1777b lim: 85 exec/s: 51 rss: 74Mb L: 59/83 MS: 1 InsertByte- 00:08:15.781 [2024-07-24 22:47:13.790322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.781 [2024-07-24 22:47:13.790346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.790393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.781 [2024-07-24 22:47:13.790405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.790457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.781 [2024-07-24 22:47:13.790469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.781 #52 NEW cov: 12310 ft: 15640 corp: 37/1834b lim: 85 exec/s: 52 rss: 74Mb L: 57/83 MS: 1 ChangeBit- 00:08:15.781 [2024-07-24 22:47:13.840611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.781 [2024-07-24 22:47:13.840636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.840686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.781 [2024-07-24 22:47:13.840699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.840749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.781 [2024-07-24 22:47:13.840763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.840814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:15.781 [2024-07-24 22:47:13.840826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.781 #53 NEW cov: 12310 ft: 15656 corp: 38/1903b lim: 85 exec/s: 53 rss: 74Mb L: 69/83 MS: 1 InsertRepeatedBytes- 00:08:15.781 [2024-07-24 22:47:13.890607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.781 [2024-07-24 22:47:13.890631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.890680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.781 [2024-07-24 22:47:13.890691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.890741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.781 [2024-07-24 22:47:13.890753] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.781 #54 NEW cov: 12310 ft: 15667 corp: 39/1962b lim: 85 exec/s: 54 rss: 74Mb L: 59/83 MS: 1 ChangeBit- 00:08:15.781 [2024-07-24 22:47:13.940738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.781 [2024-07-24 22:47:13.940763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.940806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.781 [2024-07-24 22:47:13.940818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.940868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.781 [2024-07-24 22:47:13.940883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.781 #55 NEW cov: 12310 ft: 15682 corp: 40/2021b lim: 85 exec/s: 55 rss: 74Mb L: 59/83 MS: 1 ShuffleBytes- 00:08:15.781 [2024-07-24 22:47:13.980909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.781 [2024-07-24 22:47:13.980935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.980976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.781 [2024-07-24 22:47:13.980988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.781 [2024-07-24 22:47:13.981042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:15.781 [2024-07-24 22:47:13.981056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.040 #56 NEW cov: 12310 ft: 15710 corp: 41/2088b lim: 85 exec/s: 56 rss: 74Mb L: 67/83 MS: 1 CopyPart- 00:08:16.040 [2024-07-24 22:47:14.030999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.041 [2024-07-24 22:47:14.031026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.041 [2024-07-24 22:47:14.031071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.041 [2024-07-24 22:47:14.031110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.041 [2024-07-24 22:47:14.031159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:16.041 [2024-07-24 22:47:14.031172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.041 [2024-07-24 22:47:14.081158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.041 [2024-07-24 
22:47:14.081183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.041 [2024-07-24 22:47:14.081232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.041 [2024-07-24 22:47:14.081246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.041 [2024-07-24 22:47:14.081294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:16.041 [2024-07-24 22:47:14.081307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.041 #58 NEW cov: 12310 ft: 15752 corp: 42/2145b lim: 85 exec/s: 29 rss: 74Mb L: 57/83 MS: 2 ChangeBit-CrossOver- 00:08:16.041 #58 DONE cov: 12310 ft: 15752 corp: 42/2145b lim: 85 exec/s: 29 rss: 74Mb 00:08:16.041 ###### Recommended dictionary. ###### 00:08:16.041 "\000\000\002\000" # Uses: 0 00:08:16.041 "\262) 0K\177\000\000" # Uses: 1 00:08:16.041 "\004\000" # Uses: 0 00:08:16.041 ###### End of recommended dictionary. ###### 00:08:16.041 Done 58 runs in 2 second(s) 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:16.041 22:47:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:16.300 [2024-07-24 22:47:14.263828] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:16.300 [2024-07-24 22:47:14.263912] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488233 ] 00:08:16.300 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.300 [2024-07-24 22:47:14.439138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.300 [2024-07-24 22:47:14.502828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.559 [2024-07-24 22:47:14.561348] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.559 [2024-07-24 22:47:14.577582] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:16.559 INFO: Running with entropic power schedule (0xFF, 100). 00:08:16.559 INFO: Seed: 654859558 00:08:16.559 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:08:16.559 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:08:16.559 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:16.559 INFO: A corpus is not provided, starting from an empty corpus 00:08:16.559 #2 INITED exec/s: 0 rss: 64Mb 00:08:16.559 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:16.559 This may also happen if the target rejected all inputs we tried so far 00:08:16.559 [2024-07-24 22:47:14.622231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.559 [2024-07-24 22:47:14.622272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.559 [2024-07-24 22:47:14.622302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.559 [2024-07-24 22:47:14.622317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.817 NEW_FUNC[1/700]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:16.817 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:16.817 #3 NEW cov: 11990 ft: 12007 corp: 2/12b lim: 25 exec/s: 0 rss: 71Mb L: 11/11 MS: 1 InsertRepeatedBytes- 00:08:16.817 [2024-07-24 22:47:14.802624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.817 [2024-07-24 22:47:14.802663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.817 NEW_FUNC[1/1]: 0x4cb770 in malloc_completion_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/bdev/malloc/bdev_malloc.c:865 00:08:16.817 #4 NEW cov: 12129 ft: 12999 corp: 3/18b lim: 25 exec/s: 0 rss: 71Mb L: 6/11 MS: 1 InsertRepeatedBytes- 00:08:16.817 [2024-07-24 22:47:14.862706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.817 [2024-07-24 22:47:14.862734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.817 [2024-07-24 22:47:14.862763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.817 [2024-07-24 22:47:14.862778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.817 #5 NEW cov: 12135 ft: 13231 corp: 4/30b lim: 25 exec/s: 0 rss: 71Mb L: 12/12 MS: 1 InsertByte- 00:08:16.817 [2024-07-24 22:47:14.942841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.817 [2024-07-24 22:47:14.942868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.817 #6 NEW cov: 12220 ft: 13519 corp: 5/37b lim: 25 exec/s: 0 rss: 71Mb L: 7/12 MS: 1 InsertByte- 00:08:16.817 [2024-07-24 22:47:15.023108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.817 [2024-07-24 22:47:15.023136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.075 #7 NEW cov: 12220 ft: 13674 corp: 6/45b lim: 25 exec/s: 0 rss: 72Mb L: 8/12 MS: 1 CrossOver- 00:08:17.075 [2024-07-24 22:47:15.103362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.075 [2024-07-24 22:47:15.103389] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.075 [2024-07-24 22:47:15.103420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.075 [2024-07-24 22:47:15.103435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.075 #8 NEW cov: 12220 ft: 13768 corp: 7/57b lim: 25 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 ChangeBit- 00:08:17.075 [2024-07-24 22:47:15.183534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.075 [2024-07-24 22:47:15.183563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.075 #9 NEW cov: 12220 ft: 13866 corp: 8/63b lim: 25 exec/s: 0 rss: 72Mb L: 6/12 MS: 1 ChangeByte- 00:08:17.075 [2024-07-24 22:47:15.243675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.075 [2024-07-24 22:47:15.243702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.333 #10 NEW cov: 12220 ft: 13924 corp: 9/71b lim: 25 exec/s: 0 rss: 72Mb L: 8/12 MS: 1 CrossOver- 00:08:17.334 [2024-07-24 22:47:15.303830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.334 [2024-07-24 22:47:15.303856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.334 #11 NEW cov: 12220 ft: 13955 corp: 10/80b lim: 25 exec/s: 0 rss: 72Mb L: 9/12 MS: 1 InsertByte- 00:08:17.334 [2024-07-24 22:47:15.384042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.334 [2024-07-24 22:47:15.384081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.334 #12 NEW cov: 12220 ft: 14019 corp: 11/87b lim: 25 exec/s: 0 rss: 72Mb L: 7/12 MS: 1 ShuffleBytes- 00:08:17.334 [2024-07-24 22:47:15.444840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.334 [2024-07-24 22:47:15.444870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.334 #13 NEW cov: 12220 ft: 14173 corp: 12/94b lim: 25 exec/s: 0 rss: 72Mb L: 7/12 MS: 1 ChangeByte- 00:08:17.334 [2024-07-24 22:47:15.484917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.334 [2024-07-24 22:47:15.484943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.334 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:17.334 #14 NEW cov: 12243 ft: 14254 corp: 13/101b lim: 25 exec/s: 0 rss: 72Mb L: 7/12 MS: 1 CMP- DE: "\004\000"- 00:08:17.334 [2024-07-24 22:47:15.535190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.334 [2024-07-24 22:47:15.535217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.334 [2024-07-24 22:47:15.535257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.334 [2024-07-24 22:47:15.535271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.592 #15 NEW cov: 12243 ft: 14351 corp: 14/112b lim: 25 exec/s: 0 rss: 72Mb L: 11/12 MS: 1 InsertRepeatedBytes- 00:08:17.592 [2024-07-24 22:47:15.575191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.592 [2024-07-24 22:47:15.575217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.592 #16 NEW cov: 12243 ft: 14374 corp: 15/119b lim: 25 exec/s: 0 rss: 72Mb L: 7/12 MS: 1 CopyPart- 00:08:17.592 [2024-07-24 22:47:15.615288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.592 [2024-07-24 22:47:15.615314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.592 #17 NEW cov: 12243 ft: 14393 corp: 16/126b lim: 25 exec/s: 17 rss: 72Mb L: 7/12 MS: 1 ChangeBinInt- 00:08:17.592 [2024-07-24 22:47:15.665664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.592 [2024-07-24 22:47:15.665690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.592 [2024-07-24 22:47:15.665730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.592 [2024-07-24 22:47:15.665743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.592 [2024-07-24 22:47:15.665796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.592 [2024-07-24 22:47:15.665810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.592 #18 NEW cov: 12243 ft: 14659 corp: 17/141b lim: 25 exec/s: 18 rss: 72Mb L: 15/15 MS: 1 CrossOver- 00:08:17.592 [2024-07-24 22:47:15.705685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.592 [2024-07-24 22:47:15.705712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.592 [2024-07-24 22:47:15.705753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.592 [2024-07-24 22:47:15.705770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.592 #19 NEW cov: 12243 ft: 14676 corp: 18/153b lim: 25 exec/s: 19 rss: 72Mb L: 12/15 MS: 1 ShuffleBytes- 00:08:17.592 [2024-07-24 22:47:15.745648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.592 [2024-07-24 22:47:15.745673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.592 #20 NEW cov: 12243 ft: 14697 corp: 19/162b lim: 25 exec/s: 
20 rss: 72Mb L: 9/15 MS: 1 CopyPart- 00:08:17.592 [2024-07-24 22:47:15.795824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.592 [2024-07-24 22:47:15.795851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.851 #21 NEW cov: 12243 ft: 14711 corp: 20/169b lim: 25 exec/s: 21 rss: 72Mb L: 7/15 MS: 1 ChangeBinInt- 00:08:17.851 [2024-07-24 22:47:15.846252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.851 [2024-07-24 22:47:15.846277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.851 [2024-07-24 22:47:15.846323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.851 [2024-07-24 22:47:15.846336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.851 [2024-07-24 22:47:15.846390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.851 [2024-07-24 22:47:15.846405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.851 #22 NEW cov: 12243 ft: 14811 corp: 21/186b lim: 25 exec/s: 22 rss: 72Mb L: 17/17 MS: 1 PersAutoDict- DE: "\004\000"- 00:08:17.851 [2024-07-24 22:47:15.896429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.852 [2024-07-24 22:47:15.896454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.852 [2024-07-24 22:47:15.896507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.852 [2024-07-24 22:47:15.896520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.852 [2024-07-24 22:47:15.896573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:17.852 [2024-07-24 22:47:15.896587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.852 [2024-07-24 22:47:15.896642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:17.852 [2024-07-24 22:47:15.896655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.852 #23 NEW cov: 12243 ft: 15310 corp: 22/206b lim: 25 exec/s: 23 rss: 72Mb L: 20/20 MS: 1 CrossOver- 00:08:17.852 [2024-07-24 22:47:15.936292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.852 [2024-07-24 22:47:15.936318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.852 [2024-07-24 22:47:15.936357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.852 [2024-07-24 22:47:15.936370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.852 #26 NEW cov: 12243 ft: 15316 corp: 23/216b lim: 25 exec/s: 26 rss: 72Mb L: 10/20 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:08:17.852 [2024-07-24 22:47:15.976416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.852 [2024-07-24 22:47:15.976444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.852 [2024-07-24 22:47:15.976483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:17.852 [2024-07-24 22:47:15.976497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.852 #27 NEW cov: 12243 ft: 15397 corp: 24/228b lim: 25 exec/s: 27 rss: 72Mb L: 12/20 MS: 1 ShuffleBytes- 00:08:17.852 [2024-07-24 22:47:16.026437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.852 [2024-07-24 22:47:16.026462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.852 #28 NEW cov: 12243 ft: 15407 corp: 25/236b lim: 25 exec/s: 28 rss: 72Mb L: 8/20 MS: 1 CopyPart- 00:08:18.111 [2024-07-24 22:47:16.066769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.111 [2024-07-24 22:47:16.066795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.111 [2024-07-24 22:47:16.066848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.111 [2024-07-24 22:47:16.066860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.111 [2024-07-24 22:47:16.066914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.111 [2024-07-24 22:47:16.066927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.111 #29 NEW cov: 12243 ft: 15415 corp: 26/251b lim: 25 exec/s: 29 rss: 72Mb L: 15/20 MS: 1 EraseBytes- 00:08:18.111 [2024-07-24 22:47:16.116674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.111 [2024-07-24 22:47:16.116698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.111 #30 NEW cov: 12243 ft: 15441 corp: 27/258b lim: 25 exec/s: 30 rss: 72Mb L: 7/20 MS: 1 ChangeByte- 00:08:18.111 [2024-07-24 22:47:16.156902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.111 [2024-07-24 22:47:16.156928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.111 [2024-07-24 22:47:16.156969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.111 [2024-07-24 22:47:16.156983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.111 #31 NEW cov: 12243 ft: 15448 corp: 
28/268b lim: 25 exec/s: 31 rss: 72Mb L: 10/20 MS: 1 CrossOver- 00:08:18.111 [2024-07-24 22:47:16.207498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.111 [2024-07-24 22:47:16.207524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.111 [2024-07-24 22:47:16.207579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.111 [2024-07-24 22:47:16.207593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.111 [2024-07-24 22:47:16.207647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.111 [2024-07-24 22:47:16.207661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.111 #32 NEW cov: 12252 ft: 15501 corp: 29/283b lim: 25 exec/s: 32 rss: 72Mb L: 15/20 MS: 1 CMP- DE: "tbB\346\247c\027\000"- 00:08:18.111 [2024-07-24 22:47:16.247181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.111 [2024-07-24 22:47:16.247207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.111 [2024-07-24 22:47:16.247247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.111 [2024-07-24 22:47:16.247260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.111 #33 NEW cov: 12252 ft: 15559 corp: 30/295b lim: 25 exec/s: 33 rss: 72Mb L: 12/20 MS: 1 ChangeBinInt- 00:08:18.111 [2024-07-24 22:47:16.287733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.111 [2024-07-24 22:47:16.287758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.111 [2024-07-24 22:47:16.287813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.112 [2024-07-24 22:47:16.287823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.112 [2024-07-24 22:47:16.287878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.112 [2024-07-24 22:47:16.287892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.112 [2024-07-24 22:47:16.287945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.112 [2024-07-24 22:47:16.287957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.112 [2024-07-24 22:47:16.288011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:18.112 [2024-07-24 22:47:16.288024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:18.371 #34 NEW cov: 12252 ft: 15609 
corp: 31/320b lim: 25 exec/s: 34 rss: 72Mb L: 25/25 MS: 1 CrossOver- 00:08:18.371 [2024-07-24 22:47:16.337430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.371 [2024-07-24 22:47:16.337455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.371 [2024-07-24 22:47:16.337494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.371 [2024-07-24 22:47:16.337508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.371 #35 NEW cov: 12252 ft: 15611 corp: 32/332b lim: 25 exec/s: 35 rss: 73Mb L: 12/25 MS: 1 CrossOver- 00:08:18.371 [2024-07-24 22:47:16.387483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.371 [2024-07-24 22:47:16.387508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.371 #36 NEW cov: 12252 ft: 15620 corp: 33/337b lim: 25 exec/s: 36 rss: 73Mb L: 5/25 MS: 1 EraseBytes- 00:08:18.371 [2024-07-24 22:47:16.437610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.371 [2024-07-24 22:47:16.437635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.371 #37 NEW cov: 12252 ft: 15637 corp: 34/344b lim: 25 exec/s: 37 rss: 73Mb L: 7/25 MS: 1 ChangeByte- 00:08:18.371 [2024-07-24 22:47:16.487757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.371 [2024-07-24 22:47:16.487783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.371 #38 NEW cov: 12252 ft: 15641 corp: 35/351b lim: 25 exec/s: 38 rss: 73Mb L: 7/25 MS: 1 CopyPart- 00:08:18.371 [2024-07-24 22:47:16.538252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.371 [2024-07-24 22:47:16.538277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.371 [2024-07-24 22:47:16.538330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.371 [2024-07-24 22:47:16.538342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.371 [2024-07-24 22:47:16.538397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.371 [2024-07-24 22:47:16.538411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.371 [2024-07-24 22:47:16.538468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.371 [2024-07-24 22:47:16.538480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.371 #39 NEW cov: 12252 ft: 15644 corp: 36/373b lim: 25 exec/s: 39 rss: 73Mb L: 22/25 MS: 1 CrossOver- 00:08:18.630 [2024-07-24 22:47:16.588011] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.630 [2024-07-24 22:47:16.588037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.630 #40 NEW cov: 12252 ft: 15650 corp: 37/380b lim: 25 exec/s: 20 rss: 73Mb L: 7/25 MS: 1 ChangeBit- 00:08:18.630 #40 DONE cov: 12252 ft: 15650 corp: 37/380b lim: 25 exec/s: 20 rss: 73Mb 00:08:18.630 ###### Recommended dictionary. ###### 00:08:18.630 "\004\000" # Uses: 1 00:08:18.630 "tbB\346\247c\027\000" # Uses: 0 00:08:18.630 ###### End of recommended dictionary. ###### 00:08:18.630 Done 40 runs in 2 second(s) 00:08:18.630 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:18.630 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:18.630 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:18.630 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:18.630 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:18.630 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:18.631 22:47:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:18.631 [2024-07-24 22:47:16.766734] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 
00:08:18.631 [2024-07-24 22:47:16.766825] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488546 ] 00:08:18.631 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.890 [2024-07-24 22:47:16.952381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.890 [2024-07-24 22:47:17.017501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.890 [2024-07-24 22:47:17.075942] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.890 [2024-07-24 22:47:17.092211] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:19.149 INFO: Running with entropic power schedule (0xFF, 100). 00:08:19.149 INFO: Seed: 3167874361 00:08:19.149 INFO: Loaded 1 modules (358985 inline 8-bit counters): 358985 [0x29c620c, 0x2a1dc55), 00:08:19.149 INFO: Loaded 1 PC tables (358985 PCs): 358985 [0x2a1dc58,0x2f980e8), 00:08:19.149 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:19.149 INFO: A corpus is not provided, starting from an empty corpus 00:08:19.149 #2 INITED exec/s: 0 rss: 64Mb 00:08:19.149 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:19.149 This may also happen if the target rejected all inputs we tried so far 00:08:19.149 [2024-07-24 22:47:17.147638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.149 [2024-07-24 22:47:17.147665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.149 [2024-07-24 22:47:17.147704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.149 [2024-07-24 22:47:17.147717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.149 [2024-07-24 22:47:17.147766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.149 [2024-07-24 22:47:17.147780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.149 NEW_FUNC[1/702]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:19.149 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:19.149 #5 NEW cov: 12086 ft: 12083 corp: 2/68b lim: 100 exec/s: 0 rss: 71Mb L: 67/67 MS: 3 CopyPart-CrossOver-InsertRepeatedBytes- 00:08:19.149 [2024-07-24 22:47:17.310478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.149 [2024-07-24 22:47:17.310525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.149 [2024-07-24 22:47:17.310609] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.149 [2024-07-24 22:47:17.310628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.149 [2024-07-24 22:47:17.310723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.149 [2024-07-24 22:47:17.310744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.149 [2024-07-24 22:47:17.310844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.149 [2024-07-24 22:47:17.310864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.149 [2024-07-24 22:47:17.310960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.149 [2024-07-24 22:47:17.310979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:19.149 #8 NEW cov: 12201 ft: 13011 corp: 3/168b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:19.408 [2024-07-24 22:47:17.369211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.408 [2024-07-24 22:47:17.369238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.408 #10 NEW cov: 12207 ft: 14131 corp: 4/195b lim: 100 exec/s: 0 rss: 71Mb L: 27/100 MS: 2 CopyPart-CrossOver- 00:08:19.408 [2024-07-24 22:47:17.420047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446743472582623231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.408 [2024-07-24 22:47:17.420077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.408 [2024-07-24 22:47:17.420147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.408 [2024-07-24 22:47:17.420162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.408 [2024-07-24 22:47:17.420246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.408 [2024-07-24 22:47:17.420262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.408 #16 NEW cov: 12292 ft: 14347 corp: 5/262b lim: 100 exec/s: 0 rss: 71Mb L: 67/100 MS: 1 ChangeByte- 00:08:19.408 [2024-07-24 22:47:17.479583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.408 [2024-07-24 
22:47:17.479608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.408 #17 NEW cov: 12292 ft: 14409 corp: 6/290b lim: 100 exec/s: 0 rss: 71Mb L: 28/100 MS: 1 InsertByte- 00:08:19.409 [2024-07-24 22:47:17.539885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.409 [2024-07-24 22:47:17.539911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.409 #18 NEW cov: 12292 ft: 14493 corp: 7/317b lim: 100 exec/s: 0 rss: 71Mb L: 27/100 MS: 1 ChangeBinInt- 00:08:19.409 [2024-07-24 22:47:17.590736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446743472582623231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.409 [2024-07-24 22:47:17.590761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.409 [2024-07-24 22:47:17.590839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.409 [2024-07-24 22:47:17.590854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.409 [2024-07-24 22:47:17.590938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.409 [2024-07-24 22:47:17.590953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.668 #19 NEW cov: 12292 ft: 14533 corp: 8/384b lim: 100 exec/s: 0 rss: 72Mb L: 67/100 MS: 1 ChangeByte- 00:08:19.668 [2024-07-24 22:47:17.651032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446743472582623231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.668 [2024-07-24 22:47:17.651056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.668 [2024-07-24 22:47:17.651134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.668 [2024-07-24 22:47:17.651149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.668 [2024-07-24 22:47:17.651231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.668 [2024-07-24 22:47:17.651246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.668 #25 NEW cov: 12292 ft: 14569 corp: 9/451b lim: 100 exec/s: 0 rss: 72Mb L: 67/100 MS: 1 CopyPart- 00:08:19.668 [2024-07-24 22:47:17.710719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.668 [2024-07-24 22:47:17.710743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.668 #26 NEW cov: 12292 ft: 14593 corp: 10/478b lim: 100 exec/s: 0 rss: 72Mb L: 27/100 MS: 1 CopyPart- 00:08:19.668 [2024-07-24 22:47:17.770881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.668 [2024-07-24 22:47:17.770906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.668 #27 NEW cov: 12292 ft: 14648 corp: 11/507b lim: 100 exec/s: 0 rss: 72Mb L: 29/100 MS: 1 InsertByte- 00:08:19.668 [2024-07-24 22:47:17.831405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133606 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.668 [2024-07-24 22:47:17.831429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.668 #28 NEW cov: 12292 ft: 14692 corp: 12/534b lim: 100 exec/s: 0 rss: 72Mb L: 27/100 MS: 1 ChangeByte- 00:08:19.927 [2024-07-24 22:47:17.882240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:17.882271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.927 [2024-07-24 22:47:17.882347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18375530900491337727 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:17.882361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.927 [2024-07-24 22:47:17.882455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:17.882470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.927 #29 NEW cov: 12292 ft: 14730 corp: 13/595b lim: 100 exec/s: 0 rss: 72Mb L: 61/100 MS: 1 CrossOver- 00:08:19.927 [2024-07-24 22:47:17.932742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:17.932768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.927 [2024-07-24 22:47:17.932866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65410 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:17.932881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.927 [2024-07-24 22:47:17.932962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:17.932977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.927 [2024-07-24 
22:47:17.933070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446743532543672319 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:17.933091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.927 #30 NEW cov: 12292 ft: 14791 corp: 14/690b lim: 100 exec/s: 0 rss: 72Mb L: 95/100 MS: 1 CopyPart- 00:08:19.927 [2024-07-24 22:47:17.982141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133606 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:17.982166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.927 NEW_FUNC[1/1]: 0x1a887d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:19.927 #31 NEW cov: 12315 ft: 14829 corp: 15/725b lim: 100 exec/s: 0 rss: 72Mb L: 35/100 MS: 1 CMP- DE: "\377\026c\250\302\331\004\006"- 00:08:19.927 [2024-07-24 22:47:18.052576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18374686479856173055 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:18.052607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.927 #32 NEW cov: 12315 ft: 14865 corp: 16/752b lim: 100 exec/s: 0 rss: 72Mb L: 27/100 MS: 1 ChangeBinInt- 00:08:19.927 [2024-07-24 22:47:18.103766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:18.103792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.927 [2024-07-24 22:47:18.103885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65410 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:18.103902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.927 [2024-07-24 22:47:18.103986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:18.104004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.927 [2024-07-24 22:47:18.104091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446743532543672319 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.927 [2024-07-24 22:47:18.104106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.187 #33 NEW cov: 12315 ft: 14981 corp: 17/847b lim: 100 exec/s: 33 rss: 72Mb L: 95/100 MS: 1 CopyPart- 00:08:20.187 [2024-07-24 22:47:18.163148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.163175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.187 #34 NEW cov: 12315 ft: 15005 corp: 18/874b lim: 100 exec/s: 34 rss: 72Mb L: 27/100 MS: 1 ShuffleBytes- 00:08:20.187 [2024-07-24 22:47:18.213992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446743764640399359 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.214020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.187 [2024-07-24 22:47:18.214111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.214130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.187 [2024-07-24 22:47:18.214220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.214234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.187 #35 NEW cov: 12315 ft: 15016 corp: 19/942b lim: 100 exec/s: 35 rss: 72Mb L: 68/100 MS: 1 InsertByte- 00:08:20.187 [2024-07-24 22:47:18.264393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.264419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.187 [2024-07-24 22:47:18.264517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18375530900491337727 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.264536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.187 [2024-07-24 22:47:18.264621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18425633450456252415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.264637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.187 #36 NEW cov: 12315 ft: 15038 corp: 20/1004b lim: 100 exec/s: 36 rss: 72Mb L: 62/100 MS: 1 InsertByte- 00:08:20.187 [2024-07-24 22:47:18.324206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.324231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.187 #37 NEW cov: 12315 ft: 15055 corp: 21/1031b lim: 100 exec/s: 37 rss: 72Mb L: 27/100 MS: 1 CopyPart- 00:08:20.187 [2024-07-24 22:47:18.384335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65303 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.187 [2024-07-24 22:47:18.384360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.445 #38 NEW cov: 12315 ft: 15071 corp: 
22/1051b lim: 100 exec/s: 38 rss: 72Mb L: 20/100 MS: 1 EraseBytes- 00:08:20.445 [2024-07-24 22:47:18.454926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:270480028205056 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.445 [2024-07-24 22:47:18.454952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.445 [2024-07-24 22:47:18.455026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.445 [2024-07-24 22:47:18.455041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.445 #42 NEW cov: 12315 ft: 15419 corp: 23/1105b lim: 100 exec/s: 42 rss: 72Mb L: 54/100 MS: 4 InsertRepeatedBytes-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:08:20.445 [2024-07-24 22:47:18.504814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133606 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.445 [2024-07-24 22:47:18.504840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.445 #43 NEW cov: 12315 ft: 15437 corp: 24/1132b lim: 100 exec/s: 43 rss: 72Mb L: 27/100 MS: 1 ChangeBinInt- 00:08:20.445 [2024-07-24 22:47:18.555016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:32486 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.445 [2024-07-24 22:47:18.555041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.445 #44 NEW cov: 12315 ft: 15451 corp: 25/1160b lim: 100 exec/s: 44 rss: 72Mb L: 28/100 MS: 1 CMP- DE: "\377\377~\345\214\020+\243"- 00:08:20.445 [2024-07-24 22:47:18.615623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.445 [2024-07-24 22:47:18.615648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.445 [2024-07-24 22:47:18.615708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.445 [2024-07-24 22:47:18.615721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.445 #45 NEW cov: 12315 ft: 15479 corp: 26/1202b lim: 100 exec/s: 45 rss: 72Mb L: 42/100 MS: 1 InsertRepeatedBytes- 00:08:20.704 [2024-07-24 22:47:18.666291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446743764640399359 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.704 [2024-07-24 22:47:18.666320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.704 [2024-07-24 22:47:18.666396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.704 [2024-07-24 22:47:18.666409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.704 [2024-07-24 22:47:18.666488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.704 [2024-07-24 22:47:18.666500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.704 #46 NEW cov: 12315 ft: 15498 corp: 27/1270b lim: 100 exec/s: 46 rss: 73Mb L: 68/100 MS: 1 ChangeBit- 00:08:20.704 [2024-07-24 22:47:18.725804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133606 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.704 [2024-07-24 22:47:18.725828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.704 #47 NEW cov: 12315 ft: 15503 corp: 28/1297b lim: 100 exec/s: 47 rss: 73Mb L: 27/100 MS: 1 CrossOver- 00:08:20.704 [2024-07-24 22:47:18.786057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446743515548352511 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.704 [2024-07-24 22:47:18.786087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.704 #48 NEW cov: 12315 ft: 15511 corp: 29/1324b lim: 100 exec/s: 48 rss: 73Mb L: 27/100 MS: 1 ChangeByte- 00:08:20.704 [2024-07-24 22:47:18.846654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.704 [2024-07-24 22:47:18.846678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.704 [2024-07-24 22:47:18.846740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.704 [2024-07-24 22:47:18.846757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.704 #49 NEW cov: 12315 ft: 15517 corp: 30/1366b lim: 100 exec/s: 49 rss: 73Mb L: 42/100 MS: 1 CrossOver- 00:08:20.704 [2024-07-24 22:47:18.906682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18392700874070687654 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.704 [2024-07-24 22:47:18.906706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.963 #50 NEW cov: 12315 ft: 15525 corp: 31/1394b lim: 100 exec/s: 50 rss: 73Mb L: 28/100 MS: 1 InsertByte- 00:08:20.963 [2024-07-24 22:47:18.966802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.963 [2024-07-24 22:47:18.966828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.963 #51 NEW cov: 12315 ft: 15530 corp: 32/1424b lim: 100 exec/s: 51 rss: 73Mb L: 30/100 MS: 1 InsertByte- 00:08:20.963 [2024-07-24 22:47:19.027164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446462598917390335 len:65303 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.963 [2024-07-24 22:47:19.027189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.963 #52 NEW cov: 12315 ft: 15572 corp: 33/1444b lim: 100 exec/s: 52 rss: 73Mb L: 20/100 MS: 1 ChangeBinInt- 00:08:20.963 [2024-07-24 22:47:19.077662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:270480028205056 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.963 [2024-07-24 22:47:19.077689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.963 [2024-07-24 22:47:19.077752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.963 [2024-07-24 22:47:19.077766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.963 #53 NEW cov: 12315 ft: 15588 corp: 34/1498b lim: 100 exec/s: 53 rss: 74Mb L: 54/100 MS: 1 ShuffleBytes- 00:08:20.963 [2024-07-24 22:47:19.137696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65287 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.963 [2024-07-24 22:47:19.137722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.963 #54 NEW cov: 12315 ft: 15598 corp: 35/1525b lim: 100 exec/s: 27 rss: 74Mb L: 27/100 MS: 1 ChangeBinInt- 00:08:20.963 #54 DONE cov: 12315 ft: 15598 corp: 35/1525b lim: 100 exec/s: 27 rss: 74Mb 00:08:20.963 ###### Recommended dictionary. ###### 00:08:20.963 "\377\026c\250\302\331\004\006" # Uses: 0 00:08:20.963 "\377\377~\345\214\020+\243" # Uses: 0 00:08:20.963 ###### End of recommended dictionary. 
###### 00:08:20.963 Done 54 runs in 2 second(s) 00:08:21.223 22:47:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:21.223 22:47:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:21.223 22:47:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:21.223 22:47:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:21.223 00:08:21.223 real 1m5.292s 00:08:21.223 user 1m45.318s 00:08:21.223 sys 0m7.641s 00:08:21.223 22:47:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.223 22:47:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:21.223 ************************************ 00:08:21.223 END TEST nvmf_llvm_fuzz 00:08:21.223 ************************************ 00:08:21.223 22:47:19 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:21.223 22:47:19 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:21.223 22:47:19 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:21.223 22:47:19 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.223 22:47:19 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.223 22:47:19 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:21.223 ************************************ 00:08:21.223 START TEST vfio_llvm_fuzz 00:08:21.223 ************************************ 00:08:21.223 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:21.223 * Looking for test storage... 
00:08:21.485 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:21.485 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:21.486 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:21.486 #define SPDK_CONFIG_H 00:08:21.486 #define SPDK_CONFIG_APPS 1 00:08:21.486 #define SPDK_CONFIG_ARCH native 00:08:21.486 #undef SPDK_CONFIG_ASAN 00:08:21.486 #undef SPDK_CONFIG_AVAHI 00:08:21.486 #undef SPDK_CONFIG_CET 00:08:21.486 #define SPDK_CONFIG_COVERAGE 1 00:08:21.486 #define SPDK_CONFIG_CROSS_PREFIX 00:08:21.486 #undef SPDK_CONFIG_CRYPTO 00:08:21.486 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:21.486 #undef SPDK_CONFIG_CUSTOMOCF 00:08:21.486 #undef SPDK_CONFIG_DAOS 00:08:21.486 #define SPDK_CONFIG_DAOS_DIR 00:08:21.486 #define SPDK_CONFIG_DEBUG 1 00:08:21.486 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:21.486 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:21.486 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:21.486 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:21.486 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:21.486 #undef SPDK_CONFIG_DPDK_UADK 00:08:21.486 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:21.486 #define SPDK_CONFIG_EXAMPLES 1 00:08:21.486 #undef SPDK_CONFIG_FC 00:08:21.486 #define SPDK_CONFIG_FC_PATH 00:08:21.486 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:21.486 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:21.486 #undef SPDK_CONFIG_FUSE 00:08:21.486 #define SPDK_CONFIG_FUZZER 1 00:08:21.486 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:21.486 #undef SPDK_CONFIG_GOLANG 00:08:21.486 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:21.486 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:21.486 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:21.486 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:21.486 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:21.486 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:21.486 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:21.486 #define SPDK_CONFIG_IDXD 1 00:08:21.486 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:21.486 #undef SPDK_CONFIG_IPSEC_MB 00:08:21.486 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:21.486 #define SPDK_CONFIG_ISAL 1 00:08:21.486 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:21.486 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:21.486 #define SPDK_CONFIG_LIBDIR 00:08:21.486 #undef SPDK_CONFIG_LTO 00:08:21.486 #define SPDK_CONFIG_MAX_LCORES 128 00:08:21.486 #define SPDK_CONFIG_NVME_CUSE 1 00:08:21.486 #undef SPDK_CONFIG_OCF 00:08:21.486 #define SPDK_CONFIG_OCF_PATH 00:08:21.486 #define SPDK_CONFIG_OPENSSL_PATH 00:08:21.486 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:21.486 #define SPDK_CONFIG_PGO_DIR 00:08:21.486 #undef SPDK_CONFIG_PGO_USE 00:08:21.486 #define SPDK_CONFIG_PREFIX /usr/local 00:08:21.486 #undef SPDK_CONFIG_RAID5F 00:08:21.486 #undef SPDK_CONFIG_RBD 00:08:21.486 #define SPDK_CONFIG_RDMA 1 00:08:21.486 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:21.486 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:21.486 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:21.486 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:21.486 #undef SPDK_CONFIG_SHARED 00:08:21.486 #undef SPDK_CONFIG_SMA 00:08:21.486 #define SPDK_CONFIG_TESTS 1 00:08:21.486 #undef SPDK_CONFIG_TSAN 00:08:21.486 #define SPDK_CONFIG_UBLK 1 00:08:21.486 #define SPDK_CONFIG_UBSAN 1 00:08:21.486 #undef SPDK_CONFIG_UNIT_TESTS 00:08:21.486 #undef SPDK_CONFIG_URING 00:08:21.486 #define SPDK_CONFIG_URING_PATH 00:08:21.486 #undef SPDK_CONFIG_URING_ZNS 00:08:21.486 #undef SPDK_CONFIG_USDT 00:08:21.486 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:21.486 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:21.486 #define SPDK_CONFIG_VFIO_USER 1 00:08:21.486 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:21.487 #define SPDK_CONFIG_VHOST 1 00:08:21.487 #define SPDK_CONFIG_VIRTIO 1 00:08:21.487 #undef SPDK_CONFIG_VTUNE 00:08:21.487 #define SPDK_CONFIG_VTUNE_DIR 00:08:21.487 #define SPDK_CONFIG_WERROR 1 00:08:21.487 #define SPDK_CONFIG_WPDK_DIR 00:08:21.487 #undef SPDK_CONFIG_XNVME 00:08:21.487 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:21.487 
22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:21.487 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:21.488 22:47:19 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:21.488 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # cat 00:08:21.489 22:47:19 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export valgrind= 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # valgrind= 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # uname -s 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:08:21.489 22:47:19 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@281 -- # MAKE=make 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j88 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@301 -- # TEST_MODE= 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@320 -- # [[ -z 489003 ]] 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@320 -- # kill -0 489003 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local mount target_dir 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.VQlXhe 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.VQlXhe/tests/vfio /tmp/spdk.VQlXhe 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # df -T 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # 
uses["$mount"]=0 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=948682752 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4335747072 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=54590947328 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742043136 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=7151095808 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30867644416 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871019520 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=12342374400 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348411904 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=6037504 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30870708224 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=315392 00:08:21.489 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:21.489 22:47:19 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=6174199808 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174203904 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:08:21.490 * Looking for test storage... 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@370 -- # local target_space new_size 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mount=/ 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # target_space=54590947328 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # new_size=9365688320 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.490 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # return 0 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
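The set_test_storage trace above decides where this run's scratch data goes: it builds a candidate list (the test directory, then a mktemp fallback under /tmp), creates the candidates, and accepts the first one whose filesystem reports at least the requested space in df. A simplified reconstruction, assuming GNU df and with the real helper's per-mount array bookkeeping collapsed into a direct check ($testdir is assumed to be set by the caller, as in the harness):

# Simplified sketch of set_test_storage as traced above (not the verbatim helper).
set_test_storage() {
    local requested_size=$1                                    # bytes, e.g. 2147483648
    local storage_fallback target_dir avail
    storage_fallback=$(mktemp -udt spdk.XXXXXX)                # e.g. /tmp/spdk.VQlXhe
    local candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${candidates[@]}"
    for target_dir in "${candidates[@]}"; do
        avail=$(df --output=avail -B1 "$target_dir" | tail -n1)   # bytes free on that mount
        if (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            return 0
        fi
    done
    return 1                                                   # no candidate had enough room
}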
00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:21.490 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:21.490 22:47:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:21.490 [2024-07-24 22:47:19.627664] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:21.490 [2024-07-24 22:47:19.627744] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489164 ] 00:08:21.490 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.750 [2024-07-24 22:47:19.700720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.750 [2024-07-24 22:47:19.773359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.009 INFO: Running with entropic power schedule (0xFF, 100). 00:08:22.009 INFO: Seed: 1733902389 00:08:22.009 INFO: Loaded 1 modules (356221 inline 8-bit counters): 356221 [0x2986a0c, 0x29dd989), 00:08:22.009 INFO: Loaded 1 PC tables (356221 PCs): 356221 [0x29dd990,0x2f4d160), 00:08:22.009 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:22.009 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.009 #2 INITED exec/s: 0 rss: 65Mb 00:08:22.009 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:22.009 This may also happen if the target rejected all inputs we tried so far 00:08:22.009 [2024-07-24 22:47:20.030277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:22.268 NEW_FUNC[1/659]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:22.268 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:22.268 #10 NEW cov: 10982 ft: 10807 corp: 2/7b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 3 InsertRepeatedBytes-InsertByte-InsertByte- 00:08:22.268 #11 NEW cov: 10996 ft: 14103 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:22.527 #17 NEW cov: 10996 ft: 14977 corp: 4/19b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeByte- 00:08:22.527 #18 NEW cov: 10996 ft: 15105 corp: 5/25b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeBit- 00:08:22.787 #24 NEW cov: 11003 ft: 16339 corp: 6/31b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:22.787 NEW_FUNC[1/1]: 0x1a54d00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:22.787 #27 NEW cov: 11020 ft: 16443 corp: 7/37b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 3 EraseBytes-InsertByte-CopyPart- 00:08:23.045 #33 NEW cov: 11020 ft: 16951 corp: 8/43b lim: 6 exec/s: 33 rss: 73Mb L: 6/6 MS: 1 ChangeBit- 00:08:23.045 #34 NEW cov: 11020 ft: 17100 corp: 9/49b lim: 6 exec/s: 34 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:08:23.304 #35 NEW cov: 11020 ft: 17290 corp: 10/55b lim: 6 exec/s: 35 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:23.304 #36 NEW cov: 11020 ft: 17404 corp: 11/61b lim: 6 exec/s: 36 rss: 73Mb L: 6/6 MS: 1 ChangeByte- 00:08:23.563 #39 NEW cov: 11020 ft: 18194 corp: 12/67b lim: 6 exec/s: 39 rss: 73Mb L: 6/6 MS: 3 EraseBytes-ChangeBinInt-CrossOver- 00:08:23.563 #40 NEW cov: 11020 ft: 18303 corp: 13/73b lim: 6 exec/s: 40 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:23.822 #46 NEW cov: 11020 ft: 18671 corp: 14/79b lim: 6 exec/s: 46 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:23.822 #47 NEW cov: 11027 ft: 18968 corp: 15/85b lim: 6 exec/s: 47 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:24.082 #48 NEW cov: 11027 ft: 19245 corp: 16/91b lim: 6 exec/s: 24 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:24.082 #48 DONE cov: 11027 ft: 19245 corp: 16/91b lim: 6 exec/s: 24 rss: 74Mb 00:08:24.082 Done 48 runs in 2 second(s) 00:08:24.082 [2024-07-24 22:47:22.052273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:24.341 22:47:22 
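The driver logic visible in the trace (vfio/run.sh and ../common.sh) sizes the job by counting '.fn =' entries in llvm_vfio_fuzz.c, then iterates every fuzzer type for a fixed time slice; the (( i++ )) lines recurring above and below are that loop advancing from type 0 to type 1. A condensed sketch of the driver, with helper names taken from the trace, the repository root shortened to $rootdir (an assumption), and the cleanup body left to the real run.sh:

# Condensed driver loop reconstructed from the vfio/run.sh and ../common.sh trace.
rootdir=${rootdir:-$PWD}                           # SPDK repo root; the trace uses the full Jenkins workspace path
fuzzfile=$rootdir/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
fuzz_num=$(grep -c '\.fn =' "$fuzzfile")           # 7 registered fuzz targets in this build
(( fuzz_num != 0 ))                                # guard: the trace checks this before looping
trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT

start_llvm_fuzz_short() {
    local fuzz_num=$1 time=$2 i
    for (( i = 0; i < fuzz_num; i++ )); do
        start_llvm_fuzz "$i" "$time" 0x1           # one short libFuzzer run per target on core 0x1
    done
}
start_llvm_fuzz_short "$fuzz_num" 1                # 1 second per fuzzer type, as in this job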
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:24.341 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:24.341 22:47:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:24.341 [2024-07-24 22:47:22.336183] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:24.341 [2024-07-24 22:47:22.336242] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489621 ] 00:08:24.341 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.341 [2024-07-24 22:47:22.406795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.341 [2024-07-24 22:47:22.478989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.600 INFO: Running with entropic power schedule (0xFF, 100). 00:08:24.600 INFO: Seed: 135917645 00:08:24.600 INFO: Loaded 1 modules (356221 inline 8-bit counters): 356221 [0x2986a0c, 0x29dd989), 00:08:24.600 INFO: Loaded 1 PC tables (356221 PCs): 356221 [0x29dd990,0x2f4d160), 00:08:24.600 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:24.600 INFO: A corpus is not provided, starting from an empty corpus 00:08:24.600 #2 INITED exec/s: 0 rss: 66Mb 00:08:24.600 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:24.600 This may also happen if the target rejected all inputs we tried so far 00:08:24.600 [2024-07-24 22:47:22.715766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:24.600 [2024-07-24 22:47:22.739127] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.600 [2024-07-24 22:47:22.739152] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.600 [2024-07-24 22:47:22.739170] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.859 NEW_FUNC[1/661]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:24.859 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:24.859 #35 NEW cov: 10972 ft: 10812 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 3 ChangeBinInt-CopyPart-CopyPart- 00:08:24.859 [2024-07-24 22:47:22.991313] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.859 [2024-07-24 22:47:22.991355] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.859 [2024-07-24 22:47:22.991370] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.118 #44 NEW cov: 10989 ft: 13920 corp: 3/9b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 4 EraseBytes-ChangeBit-CopyPart-InsertByte- 00:08:25.118 [2024-07-24 22:47:23.118563] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.118 [2024-07-24 22:47:23.118588] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.118 [2024-07-24 22:47:23.118604] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.118 #50 NEW cov: 10989 ft: 14432 corp: 4/13b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:08:25.118 [2024-07-24 22:47:23.245659] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.118 [2024-07-24 22:47:23.245682] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.118 [2024-07-24 22:47:23.245697] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.118 #51 NEW cov: 10989 ft: 14852 corp: 5/17b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:08:25.376 [2024-07-24 22:47:23.363899] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.376 [2024-07-24 22:47:23.363925] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.376 [2024-07-24 22:47:23.363941] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.376 #52 NEW cov: 10989 ft: 15274 corp: 6/21b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:25.376 [2024-07-24 22:47:23.492127] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.376 [2024-07-24 22:47:23.492152] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.376 [2024-07-24 22:47:23.492168] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.376 NEW_FUNC[1/1]: 0x1a54d00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:25.376 #53 NEW cov: 11006 
ft: 15410 corp: 7/25b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:25.634 [2024-07-24 22:47:23.609356] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.634 [2024-07-24 22:47:23.609380] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.634 [2024-07-24 22:47:23.609395] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.634 #59 NEW cov: 11006 ft: 16168 corp: 8/29b lim: 4 exec/s: 59 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:08:25.634 [2024-07-24 22:47:23.726253] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.634 [2024-07-24 22:47:23.726276] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.634 [2024-07-24 22:47:23.726291] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.634 #63 NEW cov: 11006 ft: 16249 corp: 9/33b lim: 4 exec/s: 63 rss: 74Mb L: 4/4 MS: 4 EraseBytes-ChangeBit-CMP-InsertByte- DE: "\017\000"- 00:08:25.893 [2024-07-24 22:47:23.843357] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.893 [2024-07-24 22:47:23.843381] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.893 [2024-07-24 22:47:23.843401] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.893 #64 NEW cov: 11006 ft: 16527 corp: 10/37b lim: 4 exec/s: 64 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:08:25.893 [2024-07-24 22:47:23.960420] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.893 [2024-07-24 22:47:23.960443] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.893 [2024-07-24 22:47:23.960459] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:25.893 #70 NEW cov: 11006 ft: 16642 corp: 11/41b lim: 4 exec/s: 70 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:08:25.893 [2024-07-24 22:47:24.087463] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.893 [2024-07-24 22:47:24.087486] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.893 [2024-07-24 22:47:24.087501] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.152 #71 NEW cov: 11006 ft: 16874 corp: 12/45b lim: 4 exec/s: 71 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:26.152 [2024-07-24 22:47:24.204538] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.152 [2024-07-24 22:47:24.204562] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.152 [2024-07-24 22:47:24.204577] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.152 #77 NEW cov: 11006 ft: 16962 corp: 13/49b lim: 4 exec/s: 77 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:26.152 [2024-07-24 22:47:24.331765] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.152 [2024-07-24 22:47:24.331790] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.152 [2024-07-24 22:47:24.331804] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.409 #83 NEW cov: 11006 ft: 17452 corp: 14/53b lim: 4 exec/s: 83 rss: 74Mb L: 4/4 MS: 1 CMP- DE: "\201\000\000\000"- 
00:08:26.409 [2024-07-24 22:47:24.459817] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.409 [2024-07-24 22:47:24.459841] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.409 [2024-07-24 22:47:24.459857] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.409 #89 NEW cov: 11013 ft: 17516 corp: 15/57b lim: 4 exec/s: 89 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:08:26.409 [2024-07-24 22:47:24.576956] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.409 [2024-07-24 22:47:24.576979] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.409 [2024-07-24 22:47:24.576994] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.666 #90 NEW cov: 11013 ft: 17557 corp: 16/61b lim: 4 exec/s: 90 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:26.666 [2024-07-24 22:47:24.729363] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.666 [2024-07-24 22:47:24.729386] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.666 [2024-07-24 22:47:24.729402] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.666 #91 NEW cov: 11013 ft: 18266 corp: 17/65b lim: 4 exec/s: 45 rss: 74Mb L: 4/4 MS: 1 PersAutoDict- DE: "\017\000"- 00:08:26.666 #91 DONE cov: 11013 ft: 18266 corp: 17/65b lim: 4 exec/s: 45 rss: 74Mb 00:08:26.666 ###### Recommended dictionary. ###### 00:08:26.666 "\017\000" # Uses: 2 00:08:26.666 "\201\000\000\000" # Uses: 0 00:08:26.666 ###### End of recommended dictionary. ###### 00:08:26.666 Done 91 runs in 2 second(s) 00:08:26.666 [2024-07-24 22:47:24.855263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir 
-p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:26.924 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:26.924 22:47:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:27.183 [2024-07-24 22:47:25.133532] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:27.183 [2024-07-24 22:47:25.133595] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490082 ] 00:08:27.183 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.183 [2024-07-24 22:47:25.204832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.183 [2024-07-24 22:47:25.277431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.440 INFO: Running with entropic power schedule (0xFF, 100). 00:08:27.440 INFO: Seed: 2932923344 00:08:27.440 INFO: Loaded 1 modules (356221 inline 8-bit counters): 356221 [0x2986a0c, 0x29dd989), 00:08:27.440 INFO: Loaded 1 PC tables (356221 PCs): 356221 [0x29dd990,0x2f4d160), 00:08:27.440 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:27.440 INFO: A corpus is not provided, starting from an empty corpus 00:08:27.440 #2 INITED exec/s: 0 rss: 64Mb 00:08:27.440 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:27.440 This may also happen if the target rejected all inputs we tried so far 00:08:27.440 [2024-07-24 22:47:25.512795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:27.441 [2024-07-24 22:47:25.535918] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.698 NEW_FUNC[1/660]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:27.698 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:27.698 #12 NEW cov: 10955 ft: 10740 corp: 2/9b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 5 ChangeBinInt-InsertByte-ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:27.699 [2024-07-24 22:47:25.786706] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.699 #23 NEW cov: 10969 ft: 13761 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 ChangeByte- 00:08:27.956 [2024-07-24 22:47:25.913558] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.956 #26 NEW cov: 10972 ft: 15210 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 3 InsertRepeatedBytes-CrossOver-CopyPart- 00:08:27.956 [2024-07-24 22:47:26.050154] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:27.956 #28 NEW cov: 10972 ft: 15629 corp: 5/33b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 2 EraseBytes-CrossOver- 00:08:28.215 [2024-07-24 22:47:26.166713] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.215 #29 NEW cov: 10972 ft: 16023 corp: 6/41b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:28.215 [2024-07-24 22:47:26.292367] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.215 NEW_FUNC[1/1]: 0x1a54d00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:28.215 #30 NEW cov: 10989 ft: 16231 corp: 7/49b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:08:28.215 [2024-07-24 22:47:26.409528] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.473 #31 NEW cov: 10989 ft: 16283 corp: 8/57b lim: 8 exec/s: 31 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:28.473 [2024-07-24 22:47:26.525297] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.473 #32 NEW cov: 10989 ft: 16531 corp: 9/65b lim: 8 exec/s: 32 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:08:28.473 [2024-07-24 22:47:26.641845] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.731 #33 NEW cov: 10989 ft: 16686 corp: 10/73b lim: 8 exec/s: 33 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:28.731 [2024-07-24 22:47:26.768485] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.731 #34 NEW cov: 10989 ft: 16936 corp: 11/81b lim: 8 exec/s: 34 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:08:28.732 [2024-07-24 22:47:26.884775] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.990 #35 NEW cov: 10989 ft: 17040 corp: 12/89b lim: 8 exec/s: 35 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:28.990 [2024-07-24 22:47:27.011002] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:28.990 
#36 NEW cov: 10989 ft: 17166 corp: 13/97b lim: 8 exec/s: 36 rss: 73Mb L: 8/8 MS: 1 CMP- DE: "\002\000"- 00:08:28.990 [2024-07-24 22:47:27.126984] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.248 #40 NEW cov: 10989 ft: 17207 corp: 14/105b lim: 8 exec/s: 40 rss: 74Mb L: 8/8 MS: 4 InsertRepeatedBytes-ChangeByte-CMP-CrossOver- DE: "\001\003"- 00:08:29.248 [2024-07-24 22:47:27.253181] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.248 #41 NEW cov: 10996 ft: 17274 corp: 15/113b lim: 8 exec/s: 41 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:29.248 [2024-07-24 22:47:27.370009] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.248 #42 NEW cov: 10996 ft: 17498 corp: 16/121b lim: 8 exec/s: 42 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:08:29.507 [2024-07-24 22:47:27.496325] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.507 #43 NEW cov: 10996 ft: 17526 corp: 17/129b lim: 8 exec/s: 21 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:08:29.507 #43 DONE cov: 10996 ft: 17526 corp: 17/129b lim: 8 exec/s: 21 rss: 74Mb 00:08:29.507 ###### Recommended dictionary. ###### 00:08:29.507 "\002\000" # Uses: 0 00:08:29.507 "\001\003" # Uses: 0 00:08:29.507 ###### End of recommended dictionary. ###### 00:08:29.507 Done 43 runs in 2 second(s) 00:08:29.507 [2024-07-24 22:47:27.591263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:29.766 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:29.766 22:47:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:29.766 [2024-07-24 22:47:27.871288] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:29.766 [2024-07-24 22:47:27.871371] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490521 ] 00:08:29.766 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.766 [2024-07-24 22:47:27.943137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.024 [2024-07-24 22:47:28.016802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.024 INFO: Running with entropic power schedule (0xFF, 100). 00:08:30.024 INFO: Seed: 1381959109 00:08:30.024 INFO: Loaded 1 modules (356221 inline 8-bit counters): 356221 [0x2986a0c, 0x29dd989), 00:08:30.024 INFO: Loaded 1 PC tables (356221 PCs): 356221 [0x29dd990,0x2f4d160), 00:08:30.024 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:30.024 INFO: A corpus is not provided, starting from an empty corpus 00:08:30.024 #2 INITED exec/s: 0 rss: 65Mb 00:08:30.024 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:30.024 This may also happen if the target rejected all inputs we tried so far 00:08:30.283 [2024-07-24 22:47:28.260685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:30.542 NEW_FUNC[1/660]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:30.542 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:30.542 #50 NEW cov: 10967 ft: 10665 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 3 CrossOver-InsertRepeatedBytes-CopyPart- 00:08:30.542 #64 NEW cov: 10984 ft: 13793 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 4 EraseBytes-ShuffleBytes-ChangeBit-CopyPart- 00:08:30.800 #70 NEW cov: 10984 ft: 15958 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:31.058 NEW_FUNC[1/1]: 0x1a54d00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:31.058 #71 NEW cov: 11001 ft: 16405 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:31.317 #72 NEW cov: 11001 ft: 17002 corp: 6/161b lim: 32 exec/s: 72 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:31.317 #73 NEW cov: 11001 ft: 17421 corp: 7/193b lim: 32 exec/s: 73 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:31.576 #92 NEW cov: 11001 ft: 17515 corp: 8/225b lim: 32 exec/s: 92 rss: 74Mb L: 32/32 MS: 4 CrossOver-CrossOver-ChangeBit-InsertRepeatedBytes- 00:08:31.834 #103 NEW cov: 11001 ft: 17650 corp: 9/257b lim: 32 exec/s: 103 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:31.834 #104 NEW cov: 11001 ft: 17852 corp: 10/289b lim: 32 exec/s: 104 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:32.092 #105 NEW cov: 11008 ft: 18178 corp: 11/321b lim: 32 exec/s: 105 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:32.351 #106 NEW cov: 11008 ft: 18565 corp: 12/353b lim: 32 exec/s: 53 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:32.351 #106 DONE cov: 11008 ft: 18565 corp: 12/353b lim: 32 exec/s: 53 rss: 74Mb 00:08:32.351 Done 106 runs in 2 second(s) 00:08:32.351 [2024-07-24 22:47:30.413268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:32.610 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:32.610 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:32.610 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:32.610 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:32.610 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:32.610 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:32.610 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:32.610 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:32.611 22:47:30 
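Each fuzzer type gets an isolated workspace before its one-second run; the trace is repeating that setup here for type 4. Fresh /tmp/vfio-user-N directories are created, the shared vfio-user JSON template is rewritten with sed to point at this instance's domain sockets, two LSAN leak entries are added, and llvm_vfio_fuzz is launched with -Z selecting the target. A sketch of one iteration with paths shortened to $rootdir (an assumption); the redirection targets are implied, since xtrace does not print them:

# One per-type setup/run iteration, reconstructed from the repeating trace (type N).
N=4
rootdir=${rootdir:-$PWD}                 # SPDK repo root; the trace uses the full Jenkins workspace path
fdir=/tmp/vfio-user-$N
mkdir -p "$fdir" "$fdir/domain/1" "$fdir/domain/2" "$rootdir/../corpus/llvm_vfio_$N"
# Point the shared vfio-user JSON template at this instance's domain sockets.
sed -e "s%/tmp/vfio-user/domain/1%$fdir/domain/1%; s%/tmp/vfio-user/domain/2%$fdir/domain/2%" \
    "$rootdir/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$fdir/fuzz_vfio_json.conf"
# Leak reports attributed to these NVMf functions are suppressed for this run.
echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_vfio_fuzz
echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_vfio_fuzz
"$rootdir/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m 0x1 -s 0 \
    -P "$rootdir/../output/llvm/" -F "$fdir/domain/1" -c "$fdir/fuzz_vfio_json.conf" \
    -t 1 -D "$rootdir/../corpus/llvm_vfio_$N" -Y "$fdir/domain/2" \
    -r "$fdir/spdk$N.sock" -Z "$N"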
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:32.611 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:32.611 22:47:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:32.611 [2024-07-24 22:47:30.700734] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:32.611 [2024-07-24 22:47:30.700796] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490959 ] 00:08:32.611 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.611 [2024-07-24 22:47:30.771608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.870 [2024-07-24 22:47:30.847616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.870 INFO: Running with entropic power schedule (0xFF, 100). 00:08:32.870 INFO: Seed: 4214975640 00:08:32.870 INFO: Loaded 1 modules (356221 inline 8-bit counters): 356221 [0x2986a0c, 0x29dd989), 00:08:32.870 INFO: Loaded 1 PC tables (356221 PCs): 356221 [0x29dd990,0x2f4d160), 00:08:32.870 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:32.870 INFO: A corpus is not provided, starting from an empty corpus 00:08:32.870 #2 INITED exec/s: 0 rss: 66Mb 00:08:32.870 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:32.870 This may also happen if the target rejected all inputs we tried so far 00:08:33.127 [2024-07-24 22:47:31.091008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:33.386 NEW_FUNC[1/660]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:33.386 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:33.386 #50 NEW cov: 10967 ft: 10485 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 3 ChangeByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:33.386 #56 NEW cov: 10986 ft: 13871 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:33.645 #62 NEW cov: 10986 ft: 15648 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:33.903 NEW_FUNC[1/1]: 0x1a54d00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:33.904 #63 NEW cov: 11003 ft: 16681 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:34.162 #64 NEW cov: 11003 ft: 17018 corp: 6/161b lim: 32 exec/s: 64 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:08:34.162 #65 NEW cov: 11003 ft: 17520 corp: 7/193b lim: 32 exec/s: 65 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:34.421 #66 NEW cov: 11003 ft: 17734 corp: 8/225b lim: 32 exec/s: 66 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:08:34.679 #67 NEW cov: 11003 ft: 17961 corp: 9/257b lim: 32 exec/s: 67 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:34.679 #68 NEW cov: 11010 ft: 18099 corp: 10/289b lim: 32 exec/s: 68 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:34.937 #74 NEW cov: 11010 ft: 18232 corp: 11/321b lim: 32 exec/s: 37 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:34.938 #74 DONE cov: 11010 ft: 18232 corp: 11/321b lim: 32 exec/s: 37 rss: 74Mb 00:08:34.938 Done 74 runs in 2 second(s) 00:08:34.938 [2024-07-24 22:47:33.083265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:35.196 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:35.196 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:35.197 22:47:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:35.197 [2024-07-24 22:47:33.366648] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:35.197 [2024-07-24 22:47:33.366714] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491390 ] 00:08:35.197 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.453 [2024-07-24 22:47:33.438336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.453 [2024-07-24 22:47:33.511624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.712 INFO: Running with entropic power schedule (0xFF, 100). 00:08:35.712 INFO: Seed: 2578982273 00:08:35.712 INFO: Loaded 1 modules (356221 inline 8-bit counters): 356221 [0x2986a0c, 0x29dd989), 00:08:35.712 INFO: Loaded 1 PC tables (356221 PCs): 356221 [0x29dd990,0x2f4d160), 00:08:35.712 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:35.712 INFO: A corpus is not provided, starting from an empty corpus 00:08:35.712 #2 INITED exec/s: 0 rss: 66Mb 00:08:35.712 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:35.712 This may also happen if the target rejected all inputs we tried so far 00:08:35.712 [2024-07-24 22:47:33.751876] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:35.712 [2024-07-24 22:47:33.826871] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.712 [2024-07-24 22:47:33.826905] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:35.971 NEW_FUNC[1/661]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:35.971 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:35.971 #35 NEW cov: 10981 ft: 10462 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 3 CopyPart-InsertRepeatedBytes-CopyPart- 00:08:35.971 [2024-07-24 22:47:34.129702] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.971 [2024-07-24 22:47:34.129739] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.229 #36 NEW cov: 10995 ft: 13670 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:36.229 [2024-07-24 22:47:34.312586] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.229 [2024-07-24 22:47:34.312614] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.229 #37 NEW cov: 10995 ft: 16070 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:08:36.487 [2024-07-24 22:47:34.514159] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.487 [2024-07-24 22:47:34.514186] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.487 NEW_FUNC[1/1]: 0x1a54d00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:36.487 #43 NEW cov: 11012 ft: 16775 corp: 5/53b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:08:36.746 [2024-07-24 22:47:34.715461] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.746 [2024-07-24 22:47:34.715486] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.746 #44 NEW cov: 11012 ft: 17301 corp: 6/66b lim: 13 exec/s: 44 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:08:36.746 [2024-07-24 22:47:34.906578] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.746 [2024-07-24 22:47:34.906604] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.004 #45 NEW cov: 11012 ft: 17841 corp: 7/79b lim: 13 exec/s: 45 rss: 74Mb L: 13/13 MS: 1 CMP- DE: "\020\000\000\000\000\000\000\000"- 00:08:37.004 [2024-07-24 22:47:35.099536] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.004 [2024-07-24 22:47:35.099562] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.004 #46 NEW cov: 11012 ft: 18014 corp: 8/92b lim: 13 exec/s: 46 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:08:37.263 [2024-07-24 22:47:35.278799] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.263 [2024-07-24 22:47:35.278826] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.263 #57 
NEW cov: 11012 ft: 18130 corp: 9/105b lim: 13 exec/s: 57 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:08:37.263 [2024-07-24 22:47:35.461728] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.263 [2024-07-24 22:47:35.461755] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.521 #58 NEW cov: 11019 ft: 18317 corp: 10/118b lim: 13 exec/s: 58 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:08:37.521 [2024-07-24 22:47:35.642453] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.521 [2024-07-24 22:47:35.642481] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.779 #59 NEW cov: 11019 ft: 18512 corp: 11/131b lim: 13 exec/s: 29 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:08:37.779 #59 DONE cov: 11019 ft: 18512 corp: 11/131b lim: 13 exec/s: 29 rss: 74Mb 00:08:37.779 ###### Recommended dictionary. ###### 00:08:37.779 "\020\000\000\000\000\000\000\000" # Uses: 0 00:08:37.779 ###### End of recommended dictionary. ###### 00:08:37.779 Done 59 runs in 2 second(s) 00:08:37.779 [2024-07-24 22:47:35.767276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:38.038 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:38.038 22:47:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:38.038 22:47:36 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:38.038 [2024-07-24 22:47:36.045415] Starting SPDK v24.09-pre git sha1 f41dbc235 / DPDK 24.03.0 initialization... 00:08:38.038 [2024-07-24 22:47:36.045478] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491826 ] 00:08:38.039 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.039 [2024-07-24 22:47:36.115952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.039 [2024-07-24 22:47:36.188373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.297 INFO: Running with entropic power schedule (0xFF, 100). 00:08:38.297 INFO: Seed: 963021300 00:08:38.297 INFO: Loaded 1 modules (356221 inline 8-bit counters): 356221 [0x2986a0c, 0x29dd989), 00:08:38.297 INFO: Loaded 1 PC tables (356221 PCs): 356221 [0x29dd990,0x2f4d160), 00:08:38.297 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:38.297 INFO: A corpus is not provided, starting from an empty corpus 00:08:38.297 #2 INITED exec/s: 0 rss: 66Mb 00:08:38.297 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:38.297 This may also happen if the target rejected all inputs we tried so far 00:08:38.297 [2024-07-24 22:47:36.430853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:38.556 [2024-07-24 22:47:36.506963] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.556 [2024-07-24 22:47:36.506995] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.556 NEW_FUNC[1/661]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:38.556 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:38.556 #6 NEW cov: 10970 ft: 10538 corp: 2/10b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 4 ChangeByte-ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:08:38.814 [2024-07-24 22:47:36.814096] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.814 [2024-07-24 22:47:36.814137] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.814 #7 NEW cov: 10987 ft: 14123 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:08:38.814 [2024-07-24 22:47:36.990914] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.814 [2024-07-24 22:47:36.990944] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.072 #28 NEW cov: 10987 ft: 15944 corp: 4/28b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:08:39.072 [2024-07-24 22:47:37.189496] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: 
Invalid argument 00:08:39.072 [2024-07-24 22:47:37.189525] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.331 NEW_FUNC[1/1]: 0x1a54d00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:39.331 #29 NEW cov: 11004 ft: 16922 corp: 5/37b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:39.331 [2024-07-24 22:47:37.381423] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.331 [2024-07-24 22:47:37.381451] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.331 #30 NEW cov: 11004 ft: 16957 corp: 6/46b lim: 9 exec/s: 30 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:39.589 [2024-07-24 22:47:37.561854] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.589 [2024-07-24 22:47:37.561882] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.589 #31 NEW cov: 11004 ft: 17405 corp: 7/55b lim: 9 exec/s: 31 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:08:39.589 [2024-07-24 22:47:37.739376] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.589 [2024-07-24 22:47:37.739403] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.847 #32 NEW cov: 11004 ft: 17485 corp: 8/64b lim: 9 exec/s: 32 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:08:39.847 [2024-07-24 22:47:37.923525] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.847 [2024-07-24 22:47:37.923551] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.847 #33 NEW cov: 11004 ft: 17876 corp: 9/73b lim: 9 exec/s: 33 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:08:40.106 [2024-07-24 22:47:38.108812] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.106 [2024-07-24 22:47:38.108839] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.106 #34 NEW cov: 11011 ft: 18132 corp: 10/82b lim: 9 exec/s: 34 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:40.106 [2024-07-24 22:47:38.290472] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.106 [2024-07-24 22:47:38.290500] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.364 #35 NEW cov: 11011 ft: 18409 corp: 11/91b lim: 9 exec/s: 35 rss: 74Mb L: 9/9 MS: 1 CopyPart- 00:08:40.364 [2024-07-24 22:47:38.468040] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.364 [2024-07-24 22:47:38.468068] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.623 #36 NEW cov: 11011 ft: 18540 corp: 12/100b lim: 9 exec/s: 18 rss: 74Mb L: 9/9 MS: 1 CopyPart- 00:08:40.623 #36 DONE cov: 11011 ft: 18540 corp: 12/100b lim: 9 exec/s: 18 rss: 74Mb 00:08:40.623 Done 36 runs in 2 second(s) 00:08:40.623 [2024-07-24 22:47:38.594255] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:40.882 22:47:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:40.882 22:47:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:40.882 22:47:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.882 22:47:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 
00:08:40.882 00:08:40.882 real 0m19.495s 00:08:40.882 user 0m27.794s 00:08:40.882 sys 0m1.739s 00:08:40.882 22:47:38 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.882 22:47:38 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:40.882 ************************************ 00:08:40.882 END TEST vfio_llvm_fuzz 00:08:40.882 ************************************ 00:08:40.882 22:47:38 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:08:40.882 00:08:40.882 real 1m25.021s 00:08:40.882 user 2m13.205s 00:08:40.882 sys 0m9.537s 00:08:40.882 22:47:38 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.882 22:47:38 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:40.882 ************************************ 00:08:40.882 END TEST llvm_fuzz 00:08:40.882 ************************************ 00:08:40.882 22:47:38 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:08:40.882 22:47:38 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:08:40.882 22:47:38 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:08:40.882 22:47:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.882 22:47:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.882 22:47:38 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:08:40.882 22:47:38 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:08:40.882 22:47:38 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:08:40.882 22:47:38 -- common/autotest_common.sh@10 -- # set +x 00:08:45.076 INFO: APP EXITING 00:08:45.076 INFO: killing all VMs 00:08:45.076 INFO: killing vhost app 00:08:45.076 INFO: EXIT DONE 00:08:48.376 Waiting for block devices as requested 00:08:48.376 0000:dd:00.0 (8086 0a54): vfio-pci -> nvme 00:08:48.376 0000:df:00.0 (8086 0a54): vfio-pci -> nvme 00:08:48.635 0000:de:00.0 (8086 0953): vfio-pci -> nvme 00:08:48.635 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:48.635 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:48.635 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:48.894 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:48.895 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:48.895 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:49.154 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:49.154 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:49.154 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:49.414 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:49.414 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:49.414 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:49.414 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:49.673 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:49.673 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:49.673 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:49.931 0000:dc:00.0 (8086 0953): vfio-pci -> nvme 00:08:55.221 Cleaning 00:08:55.221 Removing: /dev/shm/spdk_tgt_trace.pid465219 00:08:55.221 Removing: /var/run/dpdk/spdk_pid461586 00:08:55.221 Removing: /var/run/dpdk/spdk_pid463252 00:08:55.221 Removing: /var/run/dpdk/spdk_pid465219 00:08:55.221 Removing: /var/run/dpdk/spdk_pid465824 00:08:55.221 Removing: /var/run/dpdk/spdk_pid467212 00:08:55.221 Removing: /var/run/dpdk/spdk_pid467432 00:08:55.221 Removing: /var/run/dpdk/spdk_pid468341 00:08:55.221 Removing: /var/run/dpdk/spdk_pid468558 00:08:55.221 Removing: /var/run/dpdk/spdk_pid468914 00:08:55.221 Removing: /var/run/dpdk/spdk_pid469173 00:08:55.221 Removing: /var/run/dpdk/spdk_pid469449 00:08:55.221 
Removing: /var/run/dpdk/spdk_pid469729 00:08:55.221 Removing: /var/run/dpdk/spdk_pid470009 00:08:55.221 Removing: /var/run/dpdk/spdk_pid470246 00:08:55.221 Removing: /var/run/dpdk/spdk_pid470477 00:08:55.221 Removing: /var/run/dpdk/spdk_pid470741 00:08:55.221 Removing: /var/run/dpdk/spdk_pid471633 00:08:55.221 Removing: /var/run/dpdk/spdk_pid474448 00:08:55.221 Removing: /var/run/dpdk/spdk_pid474690 00:08:55.221 Removing: /var/run/dpdk/spdk_pid474930 00:08:55.221 Removing: /var/run/dpdk/spdk_pid474968 00:08:55.221 Removing: /var/run/dpdk/spdk_pid475400 00:08:55.221 Removing: /var/run/dpdk/spdk_pid475611 00:08:55.221 Removing: /var/run/dpdk/spdk_pid475871 00:08:55.221 Removing: /var/run/dpdk/spdk_pid476081 00:08:55.221 Removing: /var/run/dpdk/spdk_pid476330 00:08:55.221 Removing: /var/run/dpdk/spdk_pid476539 00:08:55.221 Removing: /var/run/dpdk/spdk_pid476627 00:08:55.221 Removing: /var/run/dpdk/spdk_pid476801 00:08:55.221 Removing: /var/run/dpdk/spdk_pid477324 00:08:55.221 Removing: /var/run/dpdk/spdk_pid477562 00:08:55.221 Removing: /var/run/dpdk/spdk_pid477783 00:08:55.221 Removing: /var/run/dpdk/spdk_pid477865 00:08:55.221 Removing: /var/run/dpdk/spdk_pid478490 00:08:55.221 Removing: /var/run/dpdk/spdk_pid478919 00:08:55.221 Removing: /var/run/dpdk/spdk_pid479357 00:08:55.221 Removing: /var/run/dpdk/spdk_pid479787 00:08:55.221 Removing: /var/run/dpdk/spdk_pid480222 00:08:55.221 Removing: /var/run/dpdk/spdk_pid480660 00:08:55.221 Removing: /var/run/dpdk/spdk_pid481087 00:08:55.221 Removing: /var/run/dpdk/spdk_pid481529 00:08:55.221 Removing: /var/run/dpdk/spdk_pid481944 00:08:55.221 Removing: /var/run/dpdk/spdk_pid482335 00:08:55.221 Removing: /var/run/dpdk/spdk_pid482740 00:08:55.221 Removing: /var/run/dpdk/spdk_pid483118 00:08:55.221 Removing: /var/run/dpdk/spdk_pid483516 00:08:55.221 Removing: /var/run/dpdk/spdk_pid483932 00:08:55.221 Removing: /var/run/dpdk/spdk_pid484369 00:08:55.221 Removing: /var/run/dpdk/spdk_pid484797 00:08:55.221 Removing: /var/run/dpdk/spdk_pid485232 00:08:55.221 Removing: /var/run/dpdk/spdk_pid485664 00:08:55.221 Removing: /var/run/dpdk/spdk_pid486099 00:08:55.221 Removing: /var/run/dpdk/spdk_pid486528 00:08:55.221 Removing: /var/run/dpdk/spdk_pid486974 00:08:55.221 Removing: /var/run/dpdk/spdk_pid487402 00:08:55.221 Removing: /var/run/dpdk/spdk_pid487841 00:08:55.221 Removing: /var/run/dpdk/spdk_pid488233 00:08:55.221 Removing: /var/run/dpdk/spdk_pid488546 00:08:55.221 Removing: /var/run/dpdk/spdk_pid489164 00:08:55.221 Removing: /var/run/dpdk/spdk_pid489621 00:08:55.221 Removing: /var/run/dpdk/spdk_pid490082 00:08:55.221 Removing: /var/run/dpdk/spdk_pid490521 00:08:55.221 Removing: /var/run/dpdk/spdk_pid490959 00:08:55.221 Removing: /var/run/dpdk/spdk_pid491390 00:08:55.221 Removing: /var/run/dpdk/spdk_pid491826 00:08:55.221 Clean 00:08:55.221 22:47:53 -- common/autotest_common.sh@1451 -- # return 0 00:08:55.221 22:47:53 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:08:55.221 22:47:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.221 22:47:53 -- common/autotest_common.sh@10 -- # set +x 00:08:55.221 22:47:53 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:08:55.221 22:47:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.221 22:47:53 -- common/autotest_common.sh@10 -- # set +x 00:08:55.221 22:47:53 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:55.221 22:47:53 -- spdk/autotest.sh@393 -- # [[ -f 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:55.221 22:47:53 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:55.221 22:47:53 -- spdk/autotest.sh@395 -- # hash lcov 00:08:55.221 22:47:53 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:08:55.221 22:47:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:55.221 22:47:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:55.221 22:47:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.221 22:47:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.221 22:47:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.221 22:47:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.221 22:47:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.221 22:47:53 -- paths/export.sh@5 -- $ export PATH 00:08:55.221 22:47:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.221 22:47:53 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:55.221 22:47:53 -- common/autobuild_common.sh@447 -- $ date +%s 00:08:55.221 22:47:53 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721854073.XXXXXX 00:08:55.221 22:47:53 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721854073.kkM4U0 00:08:55.221 22:47:53 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:08:55.221 22:47:53 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:08:55.221 22:47:53 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:08:55.221 22:47:53 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:55.221 22:47:53 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:55.221 22:47:53 -- common/autobuild_common.sh@463 -- $ get_config_params 00:08:55.221 22:47:53 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:08:55.221 22:47:53 -- common/autotest_common.sh@10 -- $ set +x 00:08:55.221 22:47:53 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:55.221 22:47:53 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:08:55.221 22:47:53 -- pm/common@17 -- $ local monitor 00:08:55.221 22:47:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.221 22:47:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.221 22:47:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.221 22:47:53 -- pm/common@21 -- $ date +%s 00:08:55.221 22:47:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.221 22:47:53 -- pm/common@21 -- $ date +%s 00:08:55.221 22:47:53 -- pm/common@25 -- $ sleep 1 00:08:55.221 22:47:53 -- pm/common@21 -- $ date +%s 00:08:55.221 22:47:53 -- pm/common@21 -- $ date +%s 00:08:55.222 22:47:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721854073 00:08:55.222 22:47:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721854073 00:08:55.222 22:47:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721854073 00:08:55.222 22:47:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721854073 00:08:55.222 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721854073_collect-vmstat.pm.log 00:08:55.222 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721854073_collect-cpu-load.pm.log 00:08:55.222 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721854073_collect-cpu-temp.pm.log 00:08:55.222 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721854073_collect-bmc-pm.bmc.pm.log 00:08:56.159 22:47:54 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:08:56.159 22:47:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j88 00:08:56.159 22:47:54 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:56.159 22:47:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:08:56.159 22:47:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:08:56.159 22:47:54 -- spdk/autopackage.sh@19 -- $ timing_finish 00:08:56.159 22:47:54 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:56.159 22:47:54 -- 
common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:08:56.159 22:47:54 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:56.159 22:47:54 -- spdk/autopackage.sh@20 -- $ exit 0 00:08:56.159 22:47:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:56.159 22:47:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:56.159 22:47:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:56.159 22:47:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.159 22:47:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:56.159 22:47:54 -- pm/common@44 -- $ pid=498591 00:08:56.159 22:47:54 -- pm/common@50 -- $ kill -TERM 498591 00:08:56.159 22:47:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.159 22:47:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:56.159 22:47:54 -- pm/common@44 -- $ pid=498593 00:08:56.159 22:47:54 -- pm/common@50 -- $ kill -TERM 498593 00:08:56.159 22:47:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.159 22:47:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:56.159 22:47:54 -- pm/common@44 -- $ pid=498596 00:08:56.159 22:47:54 -- pm/common@50 -- $ kill -TERM 498596 00:08:56.159 22:47:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.160 22:47:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:56.160 22:47:54 -- pm/common@44 -- $ pid=498631 00:08:56.160 22:47:54 -- pm/common@50 -- $ sudo -E kill -TERM 498631 00:08:56.160 + [[ -n 344566 ]] 00:08:56.160 + sudo kill 344566 00:08:56.169 [Pipeline] } 00:08:56.189 [Pipeline] // stage 00:08:56.194 [Pipeline] } 00:08:56.211 [Pipeline] // timeout 00:08:56.217 [Pipeline] } 00:08:56.237 [Pipeline] // catchError 00:08:56.242 [Pipeline] } 00:08:56.260 [Pipeline] // wrap 00:08:56.266 [Pipeline] } 00:08:56.281 [Pipeline] // catchError 00:08:56.292 [Pipeline] stage 00:08:56.295 [Pipeline] { (Epilogue) 00:08:56.311 [Pipeline] catchError 00:08:56.313 [Pipeline] { 00:08:56.330 [Pipeline] echo 00:08:56.332 Cleanup processes 00:08:56.338 [Pipeline] sh 00:08:56.627 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:56.627 498777 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:56.627 499566 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:56.644 [Pipeline] sh 00:08:56.930 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:56.930 ++ grep -v 'sudo pgrep' 00:08:56.930 ++ awk '{print $1}' 00:08:56.930 + sudo kill -9 498777 00:08:56.943 [Pipeline] sh 00:08:57.226 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:58.613 [Pipeline] sh 00:08:58.898 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:58.898 Artifacts sizes are good 00:08:58.911 [Pipeline] archiveArtifacts 00:08:58.917 Archiving artifacts 00:08:59.005 [Pipeline] sh 00:08:59.292 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:59.306 [Pipeline] cleanWs 00:08:59.315 [WS-CLEANUP] Deleting project 
workspace... 00:08:59.315 [WS-CLEANUP] Deferred wipeout is used... 00:08:59.322 [WS-CLEANUP] done 00:08:59.324 [Pipeline] } 00:08:59.344 [Pipeline] // catchError 00:08:59.356 [Pipeline] sh 00:08:59.720 + logger -p user.info -t JENKINS-CI 00:08:59.739 [Pipeline] } 00:08:59.755 [Pipeline] // stage 00:08:59.761 [Pipeline] } 00:08:59.778 [Pipeline] // node 00:08:59.784 [Pipeline] End of Pipeline 00:08:59.824 Finished: SUCCESS