00:00:00.001 Started by upstream project "autotest-per-patch" build number 124199 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.030 The recommended git tool is: git 00:00:00.030 using credential 00000000-0000-0000-0000-000000000002 00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.051 Fetching changes from the remote Git repository 00:00:00.054 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.097 Using shallow fetch with depth 1 00:00:00.097 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.097 > git --version # timeout=10 00:00:00.173 > git --version # 'git version 2.39.2' 00:00:00.173 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.257 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.257 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.495 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.511 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.528 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:03.528 > git config core.sparsecheckout # timeout=10 00:00:03.540 > git read-tree -mu HEAD # timeout=10 00:00:03.557 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:03.581 Commit message: "pool: fixes for VisualBuild class" 00:00:03.581 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:03.690 [Pipeline] Start of Pipeline 00:00:03.704 [Pipeline] library 00:00:03.705 Loading library shm_lib@master 00:00:03.706 Library shm_lib@master is cached. Copying from home. 
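The checkout above is a shallow, single-commit fetch of the jbp (jenkins build pool) repository. A minimal sketch of the equivalent sequence outside Jenkins follows; it assumes a fresh directory and omits the credential helper, HTTP proxy, and timeout settings that the Jenkins git plugin adds in the log:

    # Shallow clone of the jbp repo, mirroring the commands traced above.
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # --depth=1 matches the "shallow fetch with depth 1" in the log
    git fetch --tags --force --depth=1 origin refs/heads/master
    git checkout -f FETCH_HEAD   # resolves to 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 here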
00:00:03.723 [Pipeline] node 00:00:18.725 Still waiting to schedule task 00:00:18.726 ‘FCP03’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘FCP04’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘FCP07’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘FCP08’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘FCP09’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘FCP10’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘FCP11’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘FCP12’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘GP10’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘GP11’ is offline 00:00:18.726 ‘GP12’ is offline 00:00:18.726 ‘GP13’ is offline 00:00:18.726 ‘GP14’ is offline 00:00:18.726 ‘GP15’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘GP16’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘GP18’ doesn’t have label ‘DiskNvme’ 00:00:18.726 ‘GP19’ doesn’t have label ‘DiskNvme’ 00:00:18.727 ‘GP1’ is offline 00:00:18.727 ‘GP20’ doesn’t have label ‘DiskNvme’ 00:00:18.727 ‘GP21’ doesn’t have label ‘DiskNvme’ 00:00:18.727 ‘GP22’ doesn’t have label ‘DiskNvme’ 00:00:18.727 ‘GP24’ doesn’t have label ‘DiskNvme’ 00:00:18.727 ‘GP2’ is offline 00:00:18.727 ‘GP3’ is offline 00:00:18.727 ‘GP4’ is offline 00:00:18.727 ‘GP5’ doesn’t have label ‘DiskNvme’ 00:00:18.727 ‘GP6’ is offline 00:00:18.727 ‘GP8’ is offline 00:00:18.728 ‘GP9’ is offline 00:00:18.728 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘Jenkins’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘ME1’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘ME2’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘ME3’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘PE5’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM10’ is offline 00:00:18.728 ‘SM11’ is offline 00:00:18.728 ‘SM1’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM28’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM29’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM2’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM30’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM31’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM32’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM33’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM34’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM35’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM6’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM7’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘SM8’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘WCP0’ doesn’t have label ‘DiskNvme’ 00:00:18.728 ‘WCP2’ is offline 00:00:18.728 ‘WCP4’ is offline 00:00:18.729 ‘WFP13’ is offline 00:00:18.729 ‘WFP17’ is offline 00:00:18.729 ‘WFP21’ is offline 00:00:18.729 ‘WFP23’ is offline 00:00:18.729 ‘WFP29’ is offline 00:00:18.729 ‘WFP2’ doesn’t have label ‘DiskNvme’ 00:00:18.729 ‘WFP32’ is offline 00:00:18.730 ‘WFP33’ is offline 00:00:18.730 ‘WFP34’ is offline 00:00:18.730 ‘WFP35’ is offline 00:00:18.730 ‘WFP36’ is offline 00:00:18.730 ‘WFP37’ is offline 00:00:18.730 ‘WFP38’ is offline 00:00:18.730 ‘WFP41’ is offline 00:00:18.731 ‘WFP43’ is offline 00:00:18.731 ‘WFP49’ is offline 00:00:18.731 ‘WFP50’ is offline 00:00:18.731 ‘WFP63’ doesn’t have label ‘DiskNvme’ 00:00:18.731 ‘WFP65’ is offline 00:00:18.731 ‘WFP66’ is offline 00:00:18.731 ‘WFP67’ is offline 00:00:18.731 ‘WFP68’ is offline 00:00:18.731 ‘WFP69’ doesn’t have label ‘DiskNvme’ 00:00:18.731 ‘WFP6’ is offline 00:00:18.732 ‘WFP8’ is offline 00:00:18.732 ‘WFP9’ is offline 00:00:18.732 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme’ 00:00:18.732 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme’ 00:00:18.732 ‘spdk-pxe-02’ doesn’t have 
label ‘DiskNvme’ 00:18:29.385 Running on WFP42 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:18:29.387 [Pipeline] { 00:18:29.401 [Pipeline] catchError 00:18:29.403 [Pipeline] { 00:18:29.420 [Pipeline] wrap 00:18:29.432 [Pipeline] { 00:18:29.442 [Pipeline] stage 00:18:29.445 [Pipeline] { (Prologue) 00:18:29.635 [Pipeline] sh 00:18:29.917 + logger -p user.info -t JENKINS-CI 00:18:29.937 [Pipeline] echo 00:18:29.939 Node: WFP42 00:18:29.948 [Pipeline] sh 00:18:30.249 [Pipeline] setCustomBuildProperty 00:18:30.263 [Pipeline] echo 00:18:30.265 Cleanup processes 00:18:30.271 [Pipeline] sh 00:18:30.555 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:18:30.555 3208692 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:18:30.568 [Pipeline] sh 00:18:30.851 ++ grep -v 'sudo pgrep' 00:18:30.851 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:18:30.851 ++ awk '{print $1}' 00:18:30.851 + sudo kill -9 00:18:30.851 + true 00:18:30.865 [Pipeline] cleanWs 00:18:30.875 [WS-CLEANUP] Deleting project workspace... 00:18:30.875 [WS-CLEANUP] Deferred wipeout is used... 00:18:30.881 [WS-CLEANUP] done 00:18:30.886 [Pipeline] setCustomBuildProperty 00:18:30.900 [Pipeline] sh 00:18:31.177 + sudo git config --global --replace-all safe.directory '*' 00:18:31.259 [Pipeline] nodesByLabel 00:18:31.261 Found a total of 2 nodes with the 'sorcerer' label 00:18:31.270 [Pipeline] httpRequest 00:18:31.275 HttpMethod: GET 00:18:31.276 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:18:31.280 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:18:31.284 Response Code: HTTP/1.1 200 OK 00:18:31.284 Success: Status code 200 is in the accepted range: 200,404 00:18:31.285 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:18:31.529 [Pipeline] sh 00:18:31.811 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:18:31.829 [Pipeline] httpRequest 00:18:31.834 HttpMethod: GET 00:18:31.834 URL: http://10.211.164.101/packages/spdk_a3f6419f1f5db09443dad17e3dc59785ff2e1617.tar.gz 00:18:31.835 Sending request to url: http://10.211.164.101/packages/spdk_a3f6419f1f5db09443dad17e3dc59785ff2e1617.tar.gz 00:18:31.838 Response Code: HTTP/1.1 200 OK 00:18:31.839 Success: Status code 200 is in the accepted range: 200,404 00:18:31.839 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_a3f6419f1f5db09443dad17e3dc59785ff2e1617.tar.gz 00:18:35.751 [Pipeline] sh 00:18:36.035 + tar --no-same-owner -xf spdk_a3f6419f1f5db09443dad17e3dc59785ff2e1617.tar.gz 00:18:38.582 [Pipeline] sh 00:18:38.864 + git -C spdk log --oneline -n5 00:18:38.865 a3f6419f1 app/nvme_identify: Add NVM Identify Namespace Data for ELBA Format 00:18:38.865 3b7525570 nvme: Get PI format for Extended LBA format 00:18:38.865 1e8a0c991 nvme: Get NVM Identify Namespace Data for Extended LBA Format 00:18:38.865 493b11851 nvme: Use Host Behavior Support Feature to enable LBA Format Extension 00:18:38.865 e2612f201 nvme: Factor out getting ZNS Identify Namespace Data 00:18:38.877 [Pipeline] } 00:18:38.895 [Pipeline] // stage 00:18:38.906 [Pipeline] stage 00:18:38.909 [Pipeline] { (Prepare) 00:18:38.930 [Pipeline] writeFile 00:18:38.949 [Pipeline] sh 00:18:39.231 + logger -p user.info -t JENKINS-CI 00:18:39.245 [Pipeline] sh 00:18:39.527 + logger -p user.info -t JENKINS-CI 00:18:39.541 [Pipeline] sh 
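The "Cleanup processes" step traced above makes sure no SPDK processes left over from a previous run are still holding the workspace. A condensed, functionally equivalent sketch of those commands (the xargs form stands in for the command substitution used in the log; it is not the exact script):

    # Kill anything still running out of the previous workspace checkout.
    sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk \
      | grep -v 'sudo pgrep' \
      | awk '{print $1}' \
      | xargs -r sudo kill -9 || true   # '|| true' keeps the step green when nothing matched, as seen above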
00:18:39.824 + cat autorun-spdk.conf 00:18:39.824 SPDK_RUN_FUNCTIONAL_TEST=1 00:18:39.824 SPDK_TEST_FUZZER_SHORT=1 00:18:39.824 SPDK_TEST_FUZZER=1 00:18:39.824 SPDK_RUN_UBSAN=1 00:18:39.831 RUN_NIGHTLY=0 00:18:39.837 [Pipeline] readFile 00:18:39.864 [Pipeline] withEnv 00:18:39.867 [Pipeline] { 00:18:39.884 [Pipeline] sh 00:18:40.170 + set -ex 00:18:40.170 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:18:40.170 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:18:40.170 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:18:40.170 ++ SPDK_TEST_FUZZER_SHORT=1 00:18:40.170 ++ SPDK_TEST_FUZZER=1 00:18:40.170 ++ SPDK_RUN_UBSAN=1 00:18:40.170 ++ RUN_NIGHTLY=0 00:18:40.170 + case $SPDK_TEST_NVMF_NICS in 00:18:40.170 + DRIVERS= 00:18:40.170 + [[ -n '' ]] 00:18:40.170 + exit 0 00:18:40.179 [Pipeline] } 00:18:40.199 [Pipeline] // withEnv 00:18:40.205 [Pipeline] } 00:18:40.223 [Pipeline] // stage 00:18:40.235 [Pipeline] catchError 00:18:40.237 [Pipeline] { 00:18:40.253 [Pipeline] timeout 00:18:40.253 Timeout set to expire in 30 min 00:18:40.255 [Pipeline] { 00:18:40.274 [Pipeline] stage 00:18:40.276 [Pipeline] { (Tests) 00:18:40.293 [Pipeline] sh 00:18:40.577 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:18:40.577 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:18:40.577 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:18:40.577 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:18:40.577 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:18:40.577 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:18:40.577 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:18:40.577 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:18:40.577 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:18:40.577 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:18:40.577 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:18:40.577 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:18:40.577 + source /etc/os-release 00:18:40.577 ++ NAME='Fedora Linux' 00:18:40.577 ++ VERSION='38 (Cloud Edition)' 00:18:40.577 ++ ID=fedora 00:18:40.577 ++ VERSION_ID=38 00:18:40.577 ++ VERSION_CODENAME= 00:18:40.577 ++ PLATFORM_ID=platform:f38 00:18:40.577 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:18:40.577 ++ ANSI_COLOR='0;38;2;60;110;180' 00:18:40.577 ++ LOGO=fedora-logo-icon 00:18:40.577 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:18:40.577 ++ HOME_URL=https://fedoraproject.org/ 00:18:40.577 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:18:40.577 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:18:40.577 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:18:40.577 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:18:40.577 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:18:40.577 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:18:40.577 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:18:40.577 ++ SUPPORT_END=2024-05-14 00:18:40.577 ++ VARIANT='Cloud Edition' 00:18:40.577 ++ VARIANT_ID=cloud 00:18:40.577 + uname -a 00:18:40.577 Linux spdk-wfp-42 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:18:40.577 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:18:43.864 Hugepages 00:18:43.864 node hugesize free / total 00:18:43.864 node0 1048576kB 0 / 0 00:18:43.864 node0 2048kB 0 / 0 00:18:43.864 node1 1048576kB 
0 / 0 00:18:43.864 node1 2048kB 0 / 0 00:18:43.864 00:18:43.864 Type BDF Vendor Device NUMA Driver Device Block devices 00:18:43.864 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:18:43.864 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:18:43.864 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:18:43.864 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:18:43.864 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:18:43.864 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:18:43.864 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:18:43.864 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:18:43.864 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:18:43.864 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:18:43.864 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:18:43.864 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:18:43.864 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:18:43.864 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:18:43.864 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:18:43.864 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:18:43.864 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:18:43.864 + rm -f /tmp/spdk-ld-path 00:18:43.864 + source autorun-spdk.conf 00:18:43.864 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:18:43.864 ++ SPDK_TEST_FUZZER_SHORT=1 00:18:43.864 ++ SPDK_TEST_FUZZER=1 00:18:43.864 ++ SPDK_RUN_UBSAN=1 00:18:43.864 ++ RUN_NIGHTLY=0 00:18:43.864 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:18:43.864 + [[ -n '' ]] 00:18:43.864 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:18:43.864 + for M in /var/spdk/build-*-manifest.txt 00:18:43.864 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:18:43.864 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:18:43.864 + for M in /var/spdk/build-*-manifest.txt 00:18:43.864 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:18:43.864 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:18:43.864 ++ uname 00:18:43.864 + [[ Linux == \L\i\n\u\x ]] 00:18:43.864 + sudo dmesg -T 00:18:43.864 + sudo dmesg --clear 00:18:43.864 + dmesg_pid=3209650 00:18:43.864 + [[ Fedora Linux == FreeBSD ]] 00:18:43.864 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:43.864 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:43.865 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:18:43.865 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:18:43.865 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:18:43.865 + [[ -x /usr/src/fio-static/fio ]] 00:18:43.865 + sudo dmesg -Tw 00:18:43.865 + export FIO_BIN=/usr/src/fio-static/fio 00:18:43.865 + FIO_BIN=/usr/src/fio-static/fio 00:18:43.865 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:18:43.865 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:18:43.865 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:18:43.865 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:43.865 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:43.865 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:18:43.865 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:43.865 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:43.865 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:18:43.865 Test configuration: 00:18:43.865 SPDK_RUN_FUNCTIONAL_TEST=1 00:18:43.865 SPDK_TEST_FUZZER_SHORT=1 00:18:43.865 SPDK_TEST_FUZZER=1 00:18:43.865 SPDK_RUN_UBSAN=1 00:18:43.865 RUN_NIGHTLY=0 11:29:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:18:43.865 11:29:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:18:43.865 11:29:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.865 11:29:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.865 11:29:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.865 11:29:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.865 11:29:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.865 11:29:27 -- paths/export.sh@5 -- $ export PATH 00:18:43.865 11:29:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.865 11:29:27 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:18:43.865 11:29:27 -- common/autobuild_common.sh@437 -- $ date +%s 00:18:43.865 11:29:27 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718011767.XXXXXX 00:18:43.865 11:29:27 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718011767.0lfkK2 00:18:43.865 11:29:27 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:18:43.865 11:29:27 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:18:43.865 11:29:27 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:18:43.865 11:29:27 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:18:43.865 11:29:27 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:18:43.865 11:29:27 -- common/autobuild_common.sh@453 -- $ get_config_params 00:18:43.865 11:29:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:18:43.865 11:29:27 -- common/autotest_common.sh@10 -- $ set +x 00:18:43.865 11:29:27 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:18:43.865 11:29:27 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:18:43.865 11:29:27 -- pm/common@17 -- $ local monitor 00:18:43.865 11:29:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:43.865 11:29:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:43.865 11:29:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:43.865 11:29:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:43.865 11:29:27 -- pm/common@25 -- $ sleep 1 00:18:43.865 11:29:27 -- pm/common@21 -- $ date +%s 00:18:43.865 11:29:27 -- pm/common@21 -- $ date +%s 00:18:43.865 11:29:27 -- pm/common@21 -- $ date +%s 00:18:43.865 11:29:27 -- pm/common@21 -- $ date +%s 00:18:43.865 11:29:27 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011767 00:18:43.865 11:29:27 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011767 00:18:43.865 11:29:27 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011767 00:18:43.865 11:29:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011767 00:18:43.865 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011767_collect-vmstat.pm.log 00:18:43.865 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011767_collect-cpu-load.pm.log 00:18:43.865 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011767_collect-cpu-temp.pm.log 00:18:43.865 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011767_collect-bmc-pm.bmc.pm.log 00:18:44.801 11:29:28 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:18:44.801 11:29:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:18:44.801 11:29:28 -- spdk/autobuild.sh@12 -- $ 
umask 022 00:18:44.801 11:29:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:18:44.801 11:29:28 -- spdk/autobuild.sh@16 -- $ date -u 00:18:44.801 Mon Jun 10 09:29:28 AM UTC 2024 00:18:44.801 11:29:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:18:44.801 v24.09-pre-59-ga3f6419f1 00:18:44.801 11:29:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:18:44.801 11:29:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:18:44.801 11:29:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:18:44.801 11:29:28 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:18:44.801 11:29:28 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:18:44.801 11:29:28 -- common/autotest_common.sh@10 -- $ set +x 00:18:45.060 ************************************ 00:18:45.060 START TEST ubsan 00:18:45.060 ************************************ 00:18:45.060 11:29:28 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:18:45.060 using ubsan 00:18:45.060 00:18:45.060 real 0m0.000s 00:18:45.060 user 0m0.000s 00:18:45.060 sys 0m0.000s 00:18:45.060 11:29:28 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:18:45.060 11:29:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:18:45.060 ************************************ 00:18:45.060 END TEST ubsan 00:18:45.060 ************************************ 00:18:45.060 11:29:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:18:45.060 11:29:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:18:45.060 11:29:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:18:45.060 11:29:28 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:18:45.060 11:29:28 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:18:45.060 11:29:28 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:18:45.060 11:29:28 -- common/autotest_common.sh@1100 -- $ '[' 2 -le 1 ']' 00:18:45.060 11:29:28 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:18:45.060 11:29:28 -- common/autotest_common.sh@10 -- $ set +x 00:18:45.060 ************************************ 00:18:45.060 START TEST autobuild_llvm_precompile 00:18:45.060 ************************************ 00:18:45.060 11:29:28 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ _llvm_precompile 00:18:45.060 11:29:28 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:18:45.318 Target: x86_64-redhat-linux-gnu 00:18:45.318 Thread model: posix 00:18:45.318 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ 
fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:18:45.318 11:29:29 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:18:45.886 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:18:45.886 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:18:46.523 Using 'verbs' RDMA provider 00:19:02.772 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:19:14.976 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:19:14.976 Creating mk/config.mk...done. 00:19:14.976 Creating mk/cc.flags.mk...done. 00:19:14.976 Type 'make' to build. 00:19:14.976 00:19:14.976 real 0m29.031s 00:19:14.976 user 0m12.165s 00:19:14.976 sys 0m16.121s 00:19:14.976 11:29:57 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:19:14.976 11:29:57 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:19:14.976 ************************************ 00:19:14.976 END TEST autobuild_llvm_precompile 00:19:14.976 ************************************ 00:19:14.976 11:29:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:19:14.976 11:29:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:19:14.976 11:29:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:19:14.976 11:29:57 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:19:14.976 11:29:57 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:19:14.976 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:19:14.976 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:19:14.976 Using 'verbs' RDMA provider 00:19:28.114 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:19:38.099 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:19:38.666 Creating mk/config.mk...done. 00:19:38.666 Creating mk/cc.flags.mk...done. 00:19:38.666 Type 'make' to build. 
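The fuzzer precompile above hinges on the clang detection traced from autobuild_common.sh: the clang major version is parsed out of `clang --version`, CC/CXX are pointed at that version, and the matching libFuzzer "no_main" runtime archive is handed to configure via --with-fuzzer. A condensed sketch of that flow, using only flags visible in this log (abridged; the traced glob also accepts the full clang version string in the path, and the real configure line carries additional --with-* options):

    clang_num=$(clang --version | sed -n 's/.*clang version \([0-9]*\)\..*/\1/p')   # 16 on this node
    export CC=clang-$clang_num CXX=clang++-$clang_num
    fuzzer_lib=$(ls /usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main*.a | head -n1)
    ./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
                --with-vfio-user --with-fuzzer="$fuzzer_lib"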
00:19:38.666 11:30:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:19:38.666 11:30:22 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:19:38.666 11:30:22 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:19:38.666 11:30:22 -- common/autotest_common.sh@10 -- $ set +x 00:19:38.666 ************************************ 00:19:38.666 START TEST make 00:19:38.666 ************************************ 00:19:38.666 11:30:22 make -- common/autotest_common.sh@1124 -- $ make -j72 00:19:38.925 make[1]: Nothing to be done for 'all'. 00:19:40.845 The Meson build system 00:19:40.845 Version: 1.3.1 00:19:40.845 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:19:40.845 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:19:40.845 Build type: native build 00:19:40.845 Project name: libvfio-user 00:19:40.845 Project version: 0.0.1 00:19:40.845 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:19:40.845 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:19:40.845 Host machine cpu family: x86_64 00:19:40.845 Host machine cpu: x86_64 00:19:40.845 Run-time dependency threads found: YES 00:19:40.845 Library dl found: YES 00:19:40.845 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:19:40.845 Run-time dependency json-c found: YES 0.17 00:19:40.845 Run-time dependency cmocka found: YES 1.1.7 00:19:40.845 Program pytest-3 found: NO 00:19:40.845 Program flake8 found: NO 00:19:40.845 Program misspell-fixer found: NO 00:19:40.845 Program restructuredtext-lint found: NO 00:19:40.845 Program valgrind found: YES (/usr/bin/valgrind) 00:19:40.845 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:19:40.845 Compiler for C supports arguments -Wmissing-declarations: YES 00:19:40.845 Compiler for C supports arguments -Wwrite-strings: YES 00:19:40.845 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:19:40.845 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:19:40.845 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:19:40.845 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
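The `make -j72` above first builds the bundled libvfio-user through Meson, using the source and build directories printed at the top of the Meson output and the debug/static options summarized just below. A rough stand-alone equivalent, as a sketch only (SPDK's Makefile drives this itself; $SPDK is shorthand for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
          --buildtype=debug --default-library=static
    ninja -C "$SPDK/build/libvfio-user/build-debug"
    DESTDIR="$SPDK/build/libvfio-user" meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"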
00:19:40.845 Build targets in project: 8 00:19:40.845 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:19:40.845 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:19:40.845 00:19:40.845 libvfio-user 0.0.1 00:19:40.845 00:19:40.845 User defined options 00:19:40.845 buildtype : debug 00:19:40.845 default_library: static 00:19:40.845 libdir : /usr/local/lib 00:19:40.845 00:19:40.845 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:19:41.103 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:19:41.103 [1/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:19:41.103 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:19:41.103 [3/36] Compiling C object samples/null.p/null.c.o 00:19:41.103 [4/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:19:41.103 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:19:41.103 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:19:41.103 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:19:41.103 [8/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:19:41.103 [9/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:19:41.103 [10/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:19:41.103 [11/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:19:41.103 [12/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:19:41.103 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:19:41.103 [14/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:19:41.103 [15/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:19:41.103 [16/36] Compiling C object test/unit_tests.p/mocks.c.o 00:19:41.103 [17/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:19:41.103 [18/36] Compiling C object samples/server.p/server.c.o 00:19:41.103 [19/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:19:41.103 [20/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:19:41.103 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:19:41.103 [22/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:19:41.103 [23/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:19:41.103 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:19:41.103 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:19:41.104 [26/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:19:41.104 [27/36] Compiling C object samples/client.p/client.c.o 00:19:41.104 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:19:41.362 [29/36] Linking static target lib/libvfio-user.a 00:19:41.362 [30/36] Linking target samples/client 00:19:41.362 [31/36] Linking target test/unit_tests 00:19:41.362 [32/36] Linking target samples/null 00:19:41.362 [33/36] Linking target samples/lspci 00:19:41.362 [34/36] Linking target samples/shadow_ioeventfd_server 00:19:41.362 [35/36] Linking target samples/gpio-pci-idio-16 00:19:41.362 [36/36] Linking target samples/server 00:19:41.362 INFO: autodetecting backend as ninja 00:19:41.362 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:19:41.362 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:19:41.622 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:19:41.622 ninja: no work to do. 00:19:48.179 The Meson build system 00:19:48.179 Version: 1.3.1 00:19:48.179 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:19:48.179 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:19:48.179 Build type: native build 00:19:48.179 Program cat found: YES (/usr/bin/cat) 00:19:48.179 Project name: DPDK 00:19:48.179 Project version: 24.03.0 00:19:48.179 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:19:48.179 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:19:48.179 Host machine cpu family: x86_64 00:19:48.179 Host machine cpu: x86_64 00:19:48.179 Message: ## Building in Developer Mode ## 00:19:48.179 Program pkg-config found: YES (/usr/bin/pkg-config) 00:19:48.179 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:19:48.179 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:19:48.179 Program python3 found: YES (/usr/bin/python3) 00:19:48.179 Program cat found: YES (/usr/bin/cat) 00:19:48.179 Compiler for C supports arguments -march=native: YES 00:19:48.179 Checking for size of "void *" : 8 00:19:48.179 Checking for size of "void *" : 8 (cached) 00:19:48.179 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:19:48.179 Library m found: YES 00:19:48.179 Library numa found: YES 00:19:48.179 Has header "numaif.h" : YES 00:19:48.179 Library fdt found: NO 00:19:48.179 Library execinfo found: NO 00:19:48.179 Has header "execinfo.h" : YES 00:19:48.179 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:19:48.179 Run-time dependency libarchive found: NO (tried pkgconfig) 00:19:48.179 Run-time dependency libbsd found: NO (tried pkgconfig) 00:19:48.179 Run-time dependency jansson found: NO (tried pkgconfig) 00:19:48.179 Run-time dependency openssl found: YES 3.0.9 00:19:48.179 Run-time dependency libpcap found: YES 1.10.4 00:19:48.179 Has header "pcap.h" with dependency libpcap: YES 00:19:48.179 Compiler for C supports arguments -Wcast-qual: YES 00:19:48.179 Compiler for C supports arguments -Wdeprecated: YES 00:19:48.179 Compiler for C supports arguments -Wformat: YES 00:19:48.179 Compiler for C supports arguments -Wformat-nonliteral: YES 00:19:48.179 Compiler for C supports arguments -Wformat-security: YES 00:19:48.179 Compiler for C supports arguments -Wmissing-declarations: YES 00:19:48.179 Compiler for C supports arguments -Wmissing-prototypes: YES 00:19:48.179 Compiler for C supports arguments -Wnested-externs: YES 00:19:48.179 Compiler for C supports arguments -Wold-style-definition: YES 00:19:48.179 Compiler for C supports arguments -Wpointer-arith: YES 00:19:48.179 Compiler for C supports arguments -Wsign-compare: YES 00:19:48.179 Compiler for C supports arguments -Wstrict-prototypes: YES 00:19:48.179 Compiler for C supports arguments -Wundef: YES 00:19:48.179 Compiler for C supports arguments -Wwrite-strings: YES 00:19:48.179 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:19:48.180 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:19:48.180 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
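After libvfio-user, the same make invocation configures the bundled DPDK (spdk/dpdk) with Meson. The option summary later in this log (buildtype debug, static libraries, c_args "-fPIC -Werror", most apps and libs disabled, only the pci/vdev buses and ring mempool drivers enabled) corresponds roughly to a stand-alone setup like the sketch below; the exact command line SPDK passes is not shown in the log, and the long disable lists are omitted here:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    meson setup "$SPDK/dpdk/build-tmp" "$SPDK/dpdk" \
          --prefix="$SPDK/dpdk/build" --buildtype=debug --default-library=static \
          -Dc_args='-fPIC -Werror' \
          -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
          -Dtests=false -Denable_docs=false -Denable_kmods=false
    # (the full -Ddisable_apps=... and -Ddisable_libs=... lists from the "User defined options"
    #  summary below would be appended in the same way)
    ninja -C "$SPDK/dpdk/build-tmp"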
00:19:48.180 Program objdump found: YES (/usr/bin/objdump) 00:19:48.180 Compiler for C supports arguments -mavx512f: YES 00:19:48.180 Checking if "AVX512 checking" compiles: YES 00:19:48.180 Fetching value of define "__SSE4_2__" : 1 00:19:48.180 Fetching value of define "__AES__" : 1 00:19:48.180 Fetching value of define "__AVX__" : 1 00:19:48.180 Fetching value of define "__AVX2__" : 1 00:19:48.180 Fetching value of define "__AVX512BW__" : 1 00:19:48.180 Fetching value of define "__AVX512CD__" : 1 00:19:48.180 Fetching value of define "__AVX512DQ__" : 1 00:19:48.180 Fetching value of define "__AVX512F__" : 1 00:19:48.180 Fetching value of define "__AVX512VL__" : 1 00:19:48.180 Fetching value of define "__PCLMUL__" : 1 00:19:48.180 Fetching value of define "__RDRND__" : 1 00:19:48.180 Fetching value of define "__RDSEED__" : 1 00:19:48.180 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:19:48.180 Fetching value of define "__znver1__" : (undefined) 00:19:48.180 Fetching value of define "__znver2__" : (undefined) 00:19:48.180 Fetching value of define "__znver3__" : (undefined) 00:19:48.180 Fetching value of define "__znver4__" : (undefined) 00:19:48.180 Compiler for C supports arguments -Wno-format-truncation: NO 00:19:48.180 Message: lib/log: Defining dependency "log" 00:19:48.180 Message: lib/kvargs: Defining dependency "kvargs" 00:19:48.180 Message: lib/telemetry: Defining dependency "telemetry" 00:19:48.180 Checking for function "getentropy" : NO 00:19:48.180 Message: lib/eal: Defining dependency "eal" 00:19:48.180 Message: lib/ring: Defining dependency "ring" 00:19:48.180 Message: lib/rcu: Defining dependency "rcu" 00:19:48.180 Message: lib/mempool: Defining dependency "mempool" 00:19:48.180 Message: lib/mbuf: Defining dependency "mbuf" 00:19:48.180 Fetching value of define "__PCLMUL__" : 1 (cached) 00:19:48.180 Fetching value of define "__AVX512F__" : 1 (cached) 00:19:48.180 Fetching value of define "__AVX512BW__" : 1 (cached) 00:19:48.180 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:19:48.180 Fetching value of define "__AVX512VL__" : 1 (cached) 00:19:48.180 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:19:48.180 Compiler for C supports arguments -mpclmul: YES 00:19:48.180 Compiler for C supports arguments -maes: YES 00:19:48.180 Compiler for C supports arguments -mavx512f: YES (cached) 00:19:48.180 Compiler for C supports arguments -mavx512bw: YES 00:19:48.180 Compiler for C supports arguments -mavx512dq: YES 00:19:48.180 Compiler for C supports arguments -mavx512vl: YES 00:19:48.180 Compiler for C supports arguments -mvpclmulqdq: YES 00:19:48.180 Compiler for C supports arguments -mavx2: YES 00:19:48.180 Compiler for C supports arguments -mavx: YES 00:19:48.180 Message: lib/net: Defining dependency "net" 00:19:48.180 Message: lib/meter: Defining dependency "meter" 00:19:48.180 Message: lib/ethdev: Defining dependency "ethdev" 00:19:48.180 Message: lib/pci: Defining dependency "pci" 00:19:48.180 Message: lib/cmdline: Defining dependency "cmdline" 00:19:48.180 Message: lib/hash: Defining dependency "hash" 00:19:48.180 Message: lib/timer: Defining dependency "timer" 00:19:48.180 Message: lib/compressdev: Defining dependency "compressdev" 00:19:48.180 Message: lib/cryptodev: Defining dependency "cryptodev" 00:19:48.180 Message: lib/dmadev: Defining dependency "dmadev" 00:19:48.180 Compiler for C supports arguments -Wno-cast-qual: YES 00:19:48.180 Message: lib/power: Defining dependency "power" 00:19:48.180 Message: lib/reorder: Defining 
dependency "reorder" 00:19:48.180 Message: lib/security: Defining dependency "security" 00:19:48.180 Has header "linux/userfaultfd.h" : YES 00:19:48.180 Has header "linux/vduse.h" : YES 00:19:48.180 Message: lib/vhost: Defining dependency "vhost" 00:19:48.180 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:19:48.180 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:19:48.180 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:19:48.180 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:19:48.180 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:19:48.180 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:19:48.180 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:19:48.180 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:19:48.180 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:19:48.180 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:19:48.180 Program doxygen found: YES (/usr/bin/doxygen) 00:19:48.180 Configuring doxy-api-html.conf using configuration 00:19:48.180 Configuring doxy-api-man.conf using configuration 00:19:48.180 Program mandb found: YES (/usr/bin/mandb) 00:19:48.180 Program sphinx-build found: NO 00:19:48.180 Configuring rte_build_config.h using configuration 00:19:48.180 Message: 00:19:48.180 ================= 00:19:48.180 Applications Enabled 00:19:48.180 ================= 00:19:48.180 00:19:48.180 apps: 00:19:48.180 00:19:48.180 00:19:48.180 Message: 00:19:48.180 ================= 00:19:48.180 Libraries Enabled 00:19:48.180 ================= 00:19:48.180 00:19:48.180 libs: 00:19:48.180 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:19:48.180 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:19:48.180 cryptodev, dmadev, power, reorder, security, vhost, 00:19:48.180 00:19:48.180 Message: 00:19:48.180 =============== 00:19:48.180 Drivers Enabled 00:19:48.180 =============== 00:19:48.180 00:19:48.180 common: 00:19:48.180 00:19:48.180 bus: 00:19:48.180 pci, vdev, 00:19:48.180 mempool: 00:19:48.180 ring, 00:19:48.180 dma: 00:19:48.180 00:19:48.180 net: 00:19:48.180 00:19:48.180 crypto: 00:19:48.180 00:19:48.180 compress: 00:19:48.180 00:19:48.180 vdpa: 00:19:48.180 00:19:48.180 00:19:48.180 Message: 00:19:48.180 ================= 00:19:48.180 Content Skipped 00:19:48.180 ================= 00:19:48.180 00:19:48.180 apps: 00:19:48.180 dumpcap: explicitly disabled via build config 00:19:48.180 graph: explicitly disabled via build config 00:19:48.180 pdump: explicitly disabled via build config 00:19:48.180 proc-info: explicitly disabled via build config 00:19:48.180 test-acl: explicitly disabled via build config 00:19:48.180 test-bbdev: explicitly disabled via build config 00:19:48.180 test-cmdline: explicitly disabled via build config 00:19:48.180 test-compress-perf: explicitly disabled via build config 00:19:48.180 test-crypto-perf: explicitly disabled via build config 00:19:48.180 test-dma-perf: explicitly disabled via build config 00:19:48.180 test-eventdev: explicitly disabled via build config 00:19:48.180 test-fib: explicitly disabled via build config 00:19:48.180 test-flow-perf: explicitly disabled via build config 00:19:48.180 test-gpudev: explicitly disabled via build config 00:19:48.180 test-mldev: explicitly disabled via build config 00:19:48.180 test-pipeline: explicitly disabled via build config 00:19:48.180 test-pmd: explicitly 
disabled via build config 00:19:48.180 test-regex: explicitly disabled via build config 00:19:48.180 test-sad: explicitly disabled via build config 00:19:48.180 test-security-perf: explicitly disabled via build config 00:19:48.180 00:19:48.180 libs: 00:19:48.180 argparse: explicitly disabled via build config 00:19:48.180 metrics: explicitly disabled via build config 00:19:48.180 acl: explicitly disabled via build config 00:19:48.180 bbdev: explicitly disabled via build config 00:19:48.180 bitratestats: explicitly disabled via build config 00:19:48.180 bpf: explicitly disabled via build config 00:19:48.180 cfgfile: explicitly disabled via build config 00:19:48.180 distributor: explicitly disabled via build config 00:19:48.180 efd: explicitly disabled via build config 00:19:48.180 eventdev: explicitly disabled via build config 00:19:48.180 dispatcher: explicitly disabled via build config 00:19:48.180 gpudev: explicitly disabled via build config 00:19:48.180 gro: explicitly disabled via build config 00:19:48.180 gso: explicitly disabled via build config 00:19:48.180 ip_frag: explicitly disabled via build config 00:19:48.180 jobstats: explicitly disabled via build config 00:19:48.180 latencystats: explicitly disabled via build config 00:19:48.180 lpm: explicitly disabled via build config 00:19:48.180 member: explicitly disabled via build config 00:19:48.180 pcapng: explicitly disabled via build config 00:19:48.180 rawdev: explicitly disabled via build config 00:19:48.180 regexdev: explicitly disabled via build config 00:19:48.180 mldev: explicitly disabled via build config 00:19:48.180 rib: explicitly disabled via build config 00:19:48.180 sched: explicitly disabled via build config 00:19:48.180 stack: explicitly disabled via build config 00:19:48.180 ipsec: explicitly disabled via build config 00:19:48.180 pdcp: explicitly disabled via build config 00:19:48.180 fib: explicitly disabled via build config 00:19:48.180 port: explicitly disabled via build config 00:19:48.180 pdump: explicitly disabled via build config 00:19:48.180 table: explicitly disabled via build config 00:19:48.180 pipeline: explicitly disabled via build config 00:19:48.180 graph: explicitly disabled via build config 00:19:48.180 node: explicitly disabled via build config 00:19:48.180 00:19:48.180 drivers: 00:19:48.180 common/cpt: not in enabled drivers build config 00:19:48.180 common/dpaax: not in enabled drivers build config 00:19:48.180 common/iavf: not in enabled drivers build config 00:19:48.180 common/idpf: not in enabled drivers build config 00:19:48.181 common/ionic: not in enabled drivers build config 00:19:48.181 common/mvep: not in enabled drivers build config 00:19:48.181 common/octeontx: not in enabled drivers build config 00:19:48.181 bus/auxiliary: not in enabled drivers build config 00:19:48.181 bus/cdx: not in enabled drivers build config 00:19:48.181 bus/dpaa: not in enabled drivers build config 00:19:48.181 bus/fslmc: not in enabled drivers build config 00:19:48.181 bus/ifpga: not in enabled drivers build config 00:19:48.181 bus/platform: not in enabled drivers build config 00:19:48.181 bus/uacce: not in enabled drivers build config 00:19:48.181 bus/vmbus: not in enabled drivers build config 00:19:48.181 common/cnxk: not in enabled drivers build config 00:19:48.181 common/mlx5: not in enabled drivers build config 00:19:48.181 common/nfp: not in enabled drivers build config 00:19:48.181 common/nitrox: not in enabled drivers build config 00:19:48.181 common/qat: not in enabled drivers build config 
00:19:48.181 common/sfc_efx: not in enabled drivers build config 00:19:48.181 mempool/bucket: not in enabled drivers build config 00:19:48.181 mempool/cnxk: not in enabled drivers build config 00:19:48.181 mempool/dpaa: not in enabled drivers build config 00:19:48.181 mempool/dpaa2: not in enabled drivers build config 00:19:48.181 mempool/octeontx: not in enabled drivers build config 00:19:48.181 mempool/stack: not in enabled drivers build config 00:19:48.181 dma/cnxk: not in enabled drivers build config 00:19:48.181 dma/dpaa: not in enabled drivers build config 00:19:48.181 dma/dpaa2: not in enabled drivers build config 00:19:48.181 dma/hisilicon: not in enabled drivers build config 00:19:48.181 dma/idxd: not in enabled drivers build config 00:19:48.181 dma/ioat: not in enabled drivers build config 00:19:48.181 dma/skeleton: not in enabled drivers build config 00:19:48.181 net/af_packet: not in enabled drivers build config 00:19:48.181 net/af_xdp: not in enabled drivers build config 00:19:48.181 net/ark: not in enabled drivers build config 00:19:48.181 net/atlantic: not in enabled drivers build config 00:19:48.181 net/avp: not in enabled drivers build config 00:19:48.181 net/axgbe: not in enabled drivers build config 00:19:48.181 net/bnx2x: not in enabled drivers build config 00:19:48.181 net/bnxt: not in enabled drivers build config 00:19:48.181 net/bonding: not in enabled drivers build config 00:19:48.181 net/cnxk: not in enabled drivers build config 00:19:48.181 net/cpfl: not in enabled drivers build config 00:19:48.181 net/cxgbe: not in enabled drivers build config 00:19:48.181 net/dpaa: not in enabled drivers build config 00:19:48.181 net/dpaa2: not in enabled drivers build config 00:19:48.181 net/e1000: not in enabled drivers build config 00:19:48.181 net/ena: not in enabled drivers build config 00:19:48.181 net/enetc: not in enabled drivers build config 00:19:48.181 net/enetfec: not in enabled drivers build config 00:19:48.181 net/enic: not in enabled drivers build config 00:19:48.181 net/failsafe: not in enabled drivers build config 00:19:48.181 net/fm10k: not in enabled drivers build config 00:19:48.181 net/gve: not in enabled drivers build config 00:19:48.181 net/hinic: not in enabled drivers build config 00:19:48.181 net/hns3: not in enabled drivers build config 00:19:48.181 net/i40e: not in enabled drivers build config 00:19:48.181 net/iavf: not in enabled drivers build config 00:19:48.181 net/ice: not in enabled drivers build config 00:19:48.181 net/idpf: not in enabled drivers build config 00:19:48.181 net/igc: not in enabled drivers build config 00:19:48.181 net/ionic: not in enabled drivers build config 00:19:48.181 net/ipn3ke: not in enabled drivers build config 00:19:48.181 net/ixgbe: not in enabled drivers build config 00:19:48.181 net/mana: not in enabled drivers build config 00:19:48.181 net/memif: not in enabled drivers build config 00:19:48.181 net/mlx4: not in enabled drivers build config 00:19:48.181 net/mlx5: not in enabled drivers build config 00:19:48.181 net/mvneta: not in enabled drivers build config 00:19:48.181 net/mvpp2: not in enabled drivers build config 00:19:48.181 net/netvsc: not in enabled drivers build config 00:19:48.181 net/nfb: not in enabled drivers build config 00:19:48.181 net/nfp: not in enabled drivers build config 00:19:48.181 net/ngbe: not in enabled drivers build config 00:19:48.181 net/null: not in enabled drivers build config 00:19:48.181 net/octeontx: not in enabled drivers build config 00:19:48.181 net/octeon_ep: not in enabled 
drivers build config 00:19:48.181 net/pcap: not in enabled drivers build config 00:19:48.181 net/pfe: not in enabled drivers build config 00:19:48.181 net/qede: not in enabled drivers build config 00:19:48.181 net/ring: not in enabled drivers build config 00:19:48.181 net/sfc: not in enabled drivers build config 00:19:48.181 net/softnic: not in enabled drivers build config 00:19:48.181 net/tap: not in enabled drivers build config 00:19:48.181 net/thunderx: not in enabled drivers build config 00:19:48.181 net/txgbe: not in enabled drivers build config 00:19:48.181 net/vdev_netvsc: not in enabled drivers build config 00:19:48.181 net/vhost: not in enabled drivers build config 00:19:48.181 net/virtio: not in enabled drivers build config 00:19:48.181 net/vmxnet3: not in enabled drivers build config 00:19:48.181 raw/*: missing internal dependency, "rawdev" 00:19:48.181 crypto/armv8: not in enabled drivers build config 00:19:48.181 crypto/bcmfs: not in enabled drivers build config 00:19:48.181 crypto/caam_jr: not in enabled drivers build config 00:19:48.181 crypto/ccp: not in enabled drivers build config 00:19:48.181 crypto/cnxk: not in enabled drivers build config 00:19:48.181 crypto/dpaa_sec: not in enabled drivers build config 00:19:48.181 crypto/dpaa2_sec: not in enabled drivers build config 00:19:48.181 crypto/ipsec_mb: not in enabled drivers build config 00:19:48.181 crypto/mlx5: not in enabled drivers build config 00:19:48.181 crypto/mvsam: not in enabled drivers build config 00:19:48.181 crypto/nitrox: not in enabled drivers build config 00:19:48.181 crypto/null: not in enabled drivers build config 00:19:48.181 crypto/octeontx: not in enabled drivers build config 00:19:48.181 crypto/openssl: not in enabled drivers build config 00:19:48.181 crypto/scheduler: not in enabled drivers build config 00:19:48.181 crypto/uadk: not in enabled drivers build config 00:19:48.181 crypto/virtio: not in enabled drivers build config 00:19:48.181 compress/isal: not in enabled drivers build config 00:19:48.181 compress/mlx5: not in enabled drivers build config 00:19:48.181 compress/nitrox: not in enabled drivers build config 00:19:48.181 compress/octeontx: not in enabled drivers build config 00:19:48.181 compress/zlib: not in enabled drivers build config 00:19:48.181 regex/*: missing internal dependency, "regexdev" 00:19:48.181 ml/*: missing internal dependency, "mldev" 00:19:48.181 vdpa/ifc: not in enabled drivers build config 00:19:48.181 vdpa/mlx5: not in enabled drivers build config 00:19:48.181 vdpa/nfp: not in enabled drivers build config 00:19:48.181 vdpa/sfc: not in enabled drivers build config 00:19:48.181 event/*: missing internal dependency, "eventdev" 00:19:48.181 baseband/*: missing internal dependency, "bbdev" 00:19:48.181 gpu/*: missing internal dependency, "gpudev" 00:19:48.181 00:19:48.181 00:19:48.181 Build targets in project: 85 00:19:48.181 00:19:48.181 DPDK 24.03.0 00:19:48.181 00:19:48.181 User defined options 00:19:48.181 buildtype : debug 00:19:48.181 default_library : static 00:19:48.181 libdir : lib 00:19:48.181 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:19:48.181 c_args : -fPIC -Werror 00:19:48.181 c_link_args : 00:19:48.181 cpu_instruction_set: native 00:19:48.181 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:19:48.181 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:19:48.181 enable_docs : false 00:19:48.181 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:19:48.181 enable_kmods : false 00:19:48.181 tests : false 00:19:48.181 00:19:48.181 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:19:48.181 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:19:48.181 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:19:48.181 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:19:48.181 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:19:48.181 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:19:48.181 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:19:48.181 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:19:48.181 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:19:48.181 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:19:48.181 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:19:48.181 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:19:48.181 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:19:48.181 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:19:48.181 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:19:48.181 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:19:48.181 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:19:48.181 [16/268] Linking static target lib/librte_kvargs.a 00:19:48.181 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:19:48.181 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:19:48.181 [19/268] Linking static target lib/librte_log.a 00:19:48.438 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:19:48.438 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:19:48.439 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:19:48.439 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:19:48.439 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:19:48.439 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:19:48.439 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:19:48.439 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:19:48.696 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:19:48.696 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:19:48.696 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:19:48.696 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:19:48.696 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:19:48.696 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:19:48.696 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:19:48.696 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:19:48.696 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:19:48.696 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:19:48.696 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:19:48.696 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:19:48.696 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:19:48.696 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:19:48.696 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:19:48.696 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:19:48.696 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:19:48.696 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:19:48.696 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:19:48.696 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:19:48.696 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:19:48.696 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:19:48.696 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:19:48.696 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:19:48.696 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:19:48.696 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:19:48.696 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:19:48.696 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:19:48.696 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:19:48.696 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:19:48.696 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:19:48.696 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:19:48.696 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:19:48.696 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:19:48.696 [62/268] Linking static target lib/librte_telemetry.a 00:19:48.696 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:19:48.696 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:19:48.696 [65/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:19:48.696 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:19:48.696 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:19:48.696 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:19:48.696 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:19:48.696 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:19:48.696 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:19:48.696 [72/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:19:48.696 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:19:48.696 [74/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:19:48.696 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:19:48.696 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:19:48.696 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:19:48.696 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:19:48.696 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:19:48.696 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:19:48.696 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:19:48.696 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:19:48.697 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:19:48.697 [84/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:19:48.697 [85/268] Linking static target lib/librte_pci.a 00:19:48.697 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:19:48.697 [87/268] Linking static target lib/librte_ring.a 00:19:48.697 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:19:48.697 [89/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:19:48.697 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:19:48.697 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:19:48.697 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:19:48.697 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:19:48.697 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:19:48.697 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:19:48.697 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:19:48.697 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:19:48.697 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:19:48.697 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:19:48.697 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:19:48.697 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:19:48.697 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:19:48.697 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:19:48.697 [104/268] Linking static target lib/librte_eal.a 00:19:48.697 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:19:48.697 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:19:48.697 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:19:48.697 [108/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:19:48.955 [109/268] Linking static target lib/librte_rcu.a 00:19:48.955 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:19:48.955 [111/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:19:48.955 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:19:48.955 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:19:48.955 [114/268] Linking static target lib/librte_mempool.a 00:19:48.955 [115/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:19:48.955 [116/268] Linking target lib/librte_log.so.24.1 00:19:48.955 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:19:48.955 [118/268] Linking static target lib/librte_mbuf.a 00:19:48.955 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:19:48.955 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:19:48.955 [121/268] Linking static target lib/librte_net.a 00:19:48.955 [122/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:19:48.955 [123/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:19:49.215 [124/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:19:49.215 [125/268] Linking target lib/librte_kvargs.so.24.1 00:19:49.215 [126/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.215 [127/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.215 [128/268] Linking static target lib/librte_meter.a 00:19:49.215 [129/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:19:49.215 [130/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:19:49.215 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:19:49.215 [132/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:19:49.215 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:19:49.215 [134/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:19:49.215 [135/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:19:49.215 [136/268] Linking target lib/librte_telemetry.so.24.1 00:19:49.215 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:19:49.215 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:19:49.215 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:19:49.215 [140/268] Linking static target lib/librte_timer.a 00:19:49.215 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:19:49.215 [142/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:19:49.215 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:19:49.215 [144/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:19:49.215 [145/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:19:49.215 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:19:49.215 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:19:49.215 [148/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:19:49.215 [149/268] Linking static target lib/librte_cmdline.a 00:19:49.215 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:19:49.215 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:19:49.215 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:19:49.215 [153/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:19:49.215 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:19:49.215 [155/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:19:49.215 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.215 [157/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:19:49.215 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:19:49.215 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:19:49.215 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:19:49.215 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:19:49.475 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:19:49.475 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:19:49.475 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:19:49.475 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:19:49.475 [166/268] Linking static target lib/librte_dmadev.a 00:19:49.475 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:19:49.475 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:19:49.475 [169/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:19:49.475 [170/268] Linking static target lib/librte_compressdev.a 00:19:49.475 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:19:49.475 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:19:49.475 [173/268] Linking static target lib/librte_power.a 00:19:49.475 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:19:49.475 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:19:49.475 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:19:49.475 [177/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:19:49.475 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:19:49.475 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:19:49.475 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:19:49.475 [181/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:19:49.475 [182/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:19:49.475 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:19:49.475 [184/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:19:49.475 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:19:49.475 [186/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.475 [187/268] Linking static target lib/librte_reorder.a 00:19:49.475 [188/268] Linking static target lib/librte_security.a 00:19:49.475 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:19:49.475 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:19:49.475 [191/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:19:49.475 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:19:49.475 [193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:19:49.476 [194/268] Linking static target lib/librte_hash.a 00:19:49.476 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:19:49.476 
[196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:19:49.735 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:19:49.735 [198/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.735 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:19:49.735 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:19:49.735 [201/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:19:49.735 [202/268] Linking static target drivers/librte_bus_vdev.a 00:19:49.735 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:19:49.735 [204/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.735 [205/268] Linking static target lib/librte_cryptodev.a 00:19:49.735 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:19:49.735 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:19:49.735 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:19:49.735 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:19:49.735 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:19:49.735 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:19:49.735 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:19:49.735 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.735 [214/268] Linking static target drivers/librte_mempool_ring.a 00:19:49.736 [215/268] Linking static target drivers/librte_bus_pci.a 00:19:49.995 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:19:49.995 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.995 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:19:49.995 [219/268] Linking static target lib/librte_ethdev.a 00:19:49.995 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.995 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.995 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.995 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:50.255 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:19:50.522 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:19:50.522 [226/268] Linking static target lib/librte_vhost.a 00:19:50.522 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:19:50.522 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:19:50.780 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:19:52.160 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:52.728 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 
00:19:59.296 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:01.833 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:20:01.833 [234/268] Linking target lib/librte_eal.so.24.1 00:20:01.833 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:20:01.833 [236/268] Linking target lib/librte_timer.so.24.1 00:20:01.833 [237/268] Linking target lib/librte_meter.so.24.1 00:20:01.833 [238/268] Linking target lib/librte_ring.so.24.1 00:20:01.833 [239/268] Linking target lib/librte_pci.so.24.1 00:20:01.833 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:20:01.833 [241/268] Linking target lib/librte_dmadev.so.24.1 00:20:02.093 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:20:02.093 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:20:02.093 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:20:02.093 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:20:02.093 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:20:02.093 [247/268] Linking target lib/librte_rcu.so.24.1 00:20:02.093 [248/268] Linking target lib/librte_mempool.so.24.1 00:20:02.093 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:20:02.093 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:20:02.093 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:20:02.354 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:20:02.354 [253/268] Linking target lib/librte_mbuf.so.24.1 00:20:02.354 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:20:02.642 [255/268] Linking target lib/librte_reorder.so.24.1 00:20:02.642 [256/268] Linking target lib/librte_compressdev.so.24.1 00:20:02.642 [257/268] Linking target lib/librte_net.so.24.1 00:20:02.642 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:20:02.642 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:20:02.642 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:20:02.642 [261/268] Linking target lib/librte_security.so.24.1 00:20:02.642 [262/268] Linking target lib/librte_cmdline.so.24.1 00:20:02.642 [263/268] Linking target lib/librte_hash.so.24.1 00:20:02.642 [264/268] Linking target lib/librte_ethdev.so.24.1 00:20:02.932 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:20:02.932 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:20:02.932 [267/268] Linking target lib/librte_power.so.24.1 00:20:02.932 [268/268] Linking target lib/librte_vhost.so.24.1 00:20:02.932 INFO: autodetecting backend as ninja 00:20:02.932 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:20:03.883 CC lib/ut_mock/mock.o 00:20:03.883 CC lib/ut/ut.o 00:20:03.883 CC lib/log/log.o 00:20:03.883 CC lib/log/log_flags.o 00:20:03.883 CC lib/log/log_deprecated.o 00:20:04.143 LIB libspdk_log.a 00:20:04.143 LIB libspdk_ut.a 00:20:04.143 LIB libspdk_ut_mock.a 00:20:04.401 CXX lib/trace_parser/trace.o 00:20:04.401 CC lib/dma/dma.o 00:20:04.401 CC lib/ioat/ioat.o 00:20:04.401 CC 
lib/util/base64.o 00:20:04.401 CC lib/util/bit_array.o 00:20:04.401 CC lib/util/crc16.o 00:20:04.401 CC lib/util/cpuset.o 00:20:04.401 CC lib/util/crc32.o 00:20:04.401 CC lib/util/crc32c.o 00:20:04.401 CC lib/util/crc32_ieee.o 00:20:04.401 CC lib/util/crc64.o 00:20:04.401 CC lib/util/dif.o 00:20:04.401 CC lib/util/fd.o 00:20:04.401 CC lib/util/iov.o 00:20:04.401 CC lib/util/file.o 00:20:04.401 CC lib/util/pipe.o 00:20:04.401 CC lib/util/hexlify.o 00:20:04.401 CC lib/util/math.o 00:20:04.401 CC lib/util/strerror_tls.o 00:20:04.401 CC lib/util/string.o 00:20:04.401 CC lib/util/uuid.o 00:20:04.401 CC lib/util/fd_group.o 00:20:04.401 CC lib/util/xor.o 00:20:04.401 CC lib/util/zipf.o 00:20:04.401 CC lib/vfio_user/host/vfio_user_pci.o 00:20:04.401 CC lib/vfio_user/host/vfio_user.o 00:20:04.660 LIB libspdk_dma.a 00:20:04.660 LIB libspdk_ioat.a 00:20:04.660 LIB libspdk_vfio_user.a 00:20:04.660 LIB libspdk_util.a 00:20:04.919 LIB libspdk_trace_parser.a 00:20:04.919 CC lib/vmd/vmd.o 00:20:04.919 CC lib/vmd/led.o 00:20:04.919 CC lib/conf/conf.o 00:20:05.177 CC lib/idxd/idxd.o 00:20:05.177 CC lib/idxd/idxd_user.o 00:20:05.177 CC lib/idxd/idxd_kernel.o 00:20:05.177 CC lib/rdma/common.o 00:20:05.177 CC lib/env_dpdk/memory.o 00:20:05.177 CC lib/env_dpdk/env.o 00:20:05.177 CC lib/env_dpdk/pci.o 00:20:05.177 CC lib/rdma/rdma_verbs.o 00:20:05.177 CC lib/env_dpdk/init.o 00:20:05.177 CC lib/env_dpdk/threads.o 00:20:05.177 CC lib/json/json_parse.o 00:20:05.177 CC lib/env_dpdk/pci_ioat.o 00:20:05.177 CC lib/json/json_util.o 00:20:05.177 CC lib/env_dpdk/pci_virtio.o 00:20:05.177 CC lib/json/json_write.o 00:20:05.177 CC lib/env_dpdk/pci_vmd.o 00:20:05.177 CC lib/env_dpdk/pci_idxd.o 00:20:05.177 CC lib/env_dpdk/pci_event.o 00:20:05.177 CC lib/env_dpdk/sigbus_handler.o 00:20:05.177 CC lib/env_dpdk/pci_dpdk.o 00:20:05.177 CC lib/env_dpdk/pci_dpdk_2207.o 00:20:05.177 CC lib/env_dpdk/pci_dpdk_2211.o 00:20:05.177 LIB libspdk_conf.a 00:20:05.177 LIB libspdk_rdma.a 00:20:05.177 LIB libspdk_json.a 00:20:05.435 LIB libspdk_idxd.a 00:20:05.435 LIB libspdk_vmd.a 00:20:05.693 CC lib/jsonrpc/jsonrpc_server.o 00:20:05.693 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:20:05.693 CC lib/jsonrpc/jsonrpc_client.o 00:20:05.693 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:20:05.693 LIB libspdk_jsonrpc.a 00:20:05.951 LIB libspdk_env_dpdk.a 00:20:05.951 CC lib/rpc/rpc.o 00:20:06.208 LIB libspdk_rpc.a 00:20:06.466 CC lib/trace/trace.o 00:20:06.466 CC lib/trace/trace_rpc.o 00:20:06.466 CC lib/trace/trace_flags.o 00:20:06.466 CC lib/keyring/keyring.o 00:20:06.466 CC lib/keyring/keyring_rpc.o 00:20:06.466 CC lib/notify/notify.o 00:20:06.466 CC lib/notify/notify_rpc.o 00:20:06.724 LIB libspdk_trace.a 00:20:06.724 LIB libspdk_notify.a 00:20:06.724 LIB libspdk_keyring.a 00:20:06.983 CC lib/sock/sock.o 00:20:06.983 CC lib/sock/sock_rpc.o 00:20:06.983 CC lib/thread/thread.o 00:20:06.983 CC lib/thread/iobuf.o 00:20:07.242 LIB libspdk_sock.a 00:20:07.500 CC lib/nvme/nvme_ctrlr_cmd.o 00:20:07.500 CC lib/nvme/nvme_ctrlr.o 00:20:07.500 CC lib/nvme/nvme_fabric.o 00:20:07.500 CC lib/nvme/nvme_ns_cmd.o 00:20:07.500 CC lib/nvme/nvme_pcie.o 00:20:07.500 CC lib/nvme/nvme_ns.o 00:20:07.500 CC lib/nvme/nvme_pcie_common.o 00:20:07.500 CC lib/nvme/nvme_qpair.o 00:20:07.500 CC lib/nvme/nvme.o 00:20:07.500 CC lib/nvme/nvme_quirks.o 00:20:07.500 CC lib/nvme/nvme_transport.o 00:20:07.500 CC lib/nvme/nvme_discovery.o 00:20:07.500 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:20:07.500 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:20:07.500 CC lib/nvme/nvme_tcp.o 00:20:07.500 CC 
lib/nvme/nvme_poll_group.o 00:20:07.500 CC lib/nvme/nvme_opal.o 00:20:07.500 CC lib/nvme/nvme_io_msg.o 00:20:07.500 CC lib/nvme/nvme_auth.o 00:20:07.500 CC lib/nvme/nvme_zns.o 00:20:07.500 CC lib/nvme/nvme_cuse.o 00:20:07.500 CC lib/nvme/nvme_stubs.o 00:20:07.500 CC lib/nvme/nvme_vfio_user.o 00:20:07.500 CC lib/nvme/nvme_rdma.o 00:20:07.758 LIB libspdk_thread.a 00:20:08.016 CC lib/vfu_tgt/tgt_endpoint.o 00:20:08.016 CC lib/vfu_tgt/tgt_rpc.o 00:20:08.016 CC lib/virtio/virtio.o 00:20:08.016 CC lib/blob/blobstore.o 00:20:08.016 CC lib/blob/request.o 00:20:08.016 CC lib/virtio/virtio_vhost_user.o 00:20:08.016 CC lib/virtio/virtio_vfio_user.o 00:20:08.016 CC lib/blob/zeroes.o 00:20:08.016 CC lib/virtio/virtio_pci.o 00:20:08.016 CC lib/accel/accel.o 00:20:08.016 CC lib/blob/blob_bs_dev.o 00:20:08.016 CC lib/accel/accel_rpc.o 00:20:08.016 CC lib/accel/accel_sw.o 00:20:08.016 CC lib/init/json_config.o 00:20:08.016 CC lib/init/subsystem_rpc.o 00:20:08.016 CC lib/init/subsystem.o 00:20:08.016 CC lib/init/rpc.o 00:20:08.275 LIB libspdk_init.a 00:20:08.275 LIB libspdk_vfu_tgt.a 00:20:08.275 LIB libspdk_virtio.a 00:20:08.534 CC lib/event/app.o 00:20:08.534 CC lib/event/reactor.o 00:20:08.534 CC lib/event/app_rpc.o 00:20:08.534 CC lib/event/log_rpc.o 00:20:08.534 CC lib/event/scheduler_static.o 00:20:08.792 LIB libspdk_accel.a 00:20:08.792 LIB libspdk_event.a 00:20:08.792 LIB libspdk_nvme.a 00:20:09.051 CC lib/bdev/bdev.o 00:20:09.051 CC lib/bdev/bdev_rpc.o 00:20:09.051 CC lib/bdev/bdev_zone.o 00:20:09.051 CC lib/bdev/part.o 00:20:09.051 CC lib/bdev/scsi_nvme.o 00:20:09.986 LIB libspdk_blob.a 00:20:10.243 CC lib/lvol/lvol.o 00:20:10.243 CC lib/blobfs/blobfs.o 00:20:10.243 CC lib/blobfs/tree.o 00:20:10.811 LIB libspdk_lvol.a 00:20:10.811 LIB libspdk_blobfs.a 00:20:10.811 LIB libspdk_bdev.a 00:20:11.070 CC lib/nvmf/ctrlr.o 00:20:11.070 CC lib/nvmf/ctrlr_discovery.o 00:20:11.070 CC lib/nvmf/ctrlr_bdev.o 00:20:11.070 CC lib/nvmf/nvmf.o 00:20:11.070 CC lib/nvmf/subsystem.o 00:20:11.070 CC lib/nvmf/nvmf_rpc.o 00:20:11.070 CC lib/scsi/dev.o 00:20:11.070 CC lib/nvmf/transport.o 00:20:11.070 CC lib/nvmf/tcp.o 00:20:11.070 CC lib/nvmf/stubs.o 00:20:11.070 CC lib/scsi/lun.o 00:20:11.070 CC lib/nvmf/mdns_server.o 00:20:11.070 CC lib/scsi/port.o 00:20:11.070 CC lib/nvmf/vfio_user.o 00:20:11.070 CC lib/nvmf/rdma.o 00:20:11.070 CC lib/scsi/scsi.o 00:20:11.070 CC lib/scsi/scsi_bdev.o 00:20:11.070 CC lib/nvmf/auth.o 00:20:11.070 CC lib/scsi/scsi_pr.o 00:20:11.070 CC lib/scsi/scsi_rpc.o 00:20:11.336 CC lib/ublk/ublk.o 00:20:11.336 CC lib/ublk/ublk_rpc.o 00:20:11.336 CC lib/scsi/task.o 00:20:11.336 CC lib/ftl/ftl_core.o 00:20:11.336 CC lib/ftl/ftl_debug.o 00:20:11.336 CC lib/ftl/ftl_layout.o 00:20:11.336 CC lib/ftl/ftl_init.o 00:20:11.336 CC lib/ftl/ftl_io.o 00:20:11.336 CC lib/ftl/ftl_sb.o 00:20:11.336 CC lib/ftl/ftl_l2p.o 00:20:11.336 CC lib/ftl/ftl_l2p_flat.o 00:20:11.336 CC lib/ftl/ftl_band.o 00:20:11.336 CC lib/ftl/ftl_nv_cache.o 00:20:11.336 CC lib/ftl/ftl_writer.o 00:20:11.336 CC lib/ftl/ftl_band_ops.o 00:20:11.336 CC lib/nbd/nbd_rpc.o 00:20:11.336 CC lib/ftl/ftl_rq.o 00:20:11.336 CC lib/nbd/nbd.o 00:20:11.336 CC lib/ftl/ftl_l2p_cache.o 00:20:11.336 CC lib/ftl/ftl_reloc.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:20:11.336 CC lib/ftl/ftl_p2l.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_md.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_startup.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_misc.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_ioch.o 
00:20:11.336 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_band.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:20:11.336 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:20:11.336 CC lib/ftl/utils/ftl_conf.o 00:20:11.336 CC lib/ftl/utils/ftl_md.o 00:20:11.336 CC lib/ftl/utils/ftl_mempool.o 00:20:11.336 CC lib/ftl/utils/ftl_bitmap.o 00:20:11.336 CC lib/ftl/utils/ftl_property.o 00:20:11.336 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:20:11.336 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:20:11.336 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:20:11.336 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:20:11.336 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:20:11.336 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:20:11.336 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:20:11.336 CC lib/ftl/upgrade/ftl_sb_v3.o 00:20:11.336 CC lib/ftl/upgrade/ftl_sb_v5.o 00:20:11.336 CC lib/ftl/nvc/ftl_nvc_dev.o 00:20:11.336 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:20:11.336 CC lib/ftl/base/ftl_base_dev.o 00:20:11.336 CC lib/ftl/base/ftl_base_bdev.o 00:20:11.336 CC lib/ftl/ftl_trace.o 00:20:11.594 LIB libspdk_nbd.a 00:20:11.852 LIB libspdk_scsi.a 00:20:11.852 LIB libspdk_ublk.a 00:20:12.110 LIB libspdk_ftl.a 00:20:12.110 CC lib/vhost/vhost_rpc.o 00:20:12.110 CC lib/vhost/vhost.o 00:20:12.110 CC lib/vhost/vhost_blk.o 00:20:12.110 CC lib/vhost/vhost_scsi.o 00:20:12.110 CC lib/vhost/rte_vhost_user.o 00:20:12.110 CC lib/iscsi/init_grp.o 00:20:12.110 CC lib/iscsi/conn.o 00:20:12.110 CC lib/iscsi/iscsi.o 00:20:12.110 CC lib/iscsi/md5.o 00:20:12.110 CC lib/iscsi/param.o 00:20:12.110 CC lib/iscsi/portal_grp.o 00:20:12.110 CC lib/iscsi/tgt_node.o 00:20:12.110 CC lib/iscsi/iscsi_subsystem.o 00:20:12.110 CC lib/iscsi/iscsi_rpc.o 00:20:12.110 CC lib/iscsi/task.o 00:20:12.676 LIB libspdk_nvmf.a 00:20:12.676 LIB libspdk_vhost.a 00:20:12.934 LIB libspdk_iscsi.a 00:20:13.499 CC module/env_dpdk/env_dpdk_rpc.o 00:20:13.499 CC module/vfu_device/vfu_virtio_blk.o 00:20:13.499 CC module/vfu_device/vfu_virtio.o 00:20:13.499 CC module/vfu_device/vfu_virtio_scsi.o 00:20:13.499 CC module/vfu_device/vfu_virtio_rpc.o 00:20:13.499 CC module/accel/ioat/accel_ioat.o 00:20:13.499 CC module/accel/ioat/accel_ioat_rpc.o 00:20:13.499 CC module/accel/dsa/accel_dsa.o 00:20:13.499 CC module/accel/dsa/accel_dsa_rpc.o 00:20:13.499 CC module/accel/error/accel_error_rpc.o 00:20:13.499 CC module/accel/error/accel_error.o 00:20:13.499 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:20:13.499 CC module/accel/iaa/accel_iaa.o 00:20:13.499 CC module/accel/iaa/accel_iaa_rpc.o 00:20:13.499 LIB libspdk_env_dpdk_rpc.a 00:20:13.499 CC module/sock/posix/posix.o 00:20:13.499 CC module/scheduler/dynamic/scheduler_dynamic.o 00:20:13.499 CC module/scheduler/gscheduler/gscheduler.o 00:20:13.499 CC module/blob/bdev/blob_bdev.o 00:20:13.499 CC module/keyring/file/keyring.o 00:20:13.499 CC module/keyring/file/keyring_rpc.o 00:20:13.499 CC module/keyring/linux/keyring.o 00:20:13.499 CC module/keyring/linux/keyring_rpc.o 00:20:13.499 LIB libspdk_scheduler_dpdk_governor.a 00:20:13.499 LIB libspdk_accel_ioat.a 00:20:13.499 LIB libspdk_scheduler_gscheduler.a 00:20:13.758 LIB libspdk_keyring_file.a 00:20:13.758 LIB libspdk_accel_error.a 00:20:13.758 LIB libspdk_keyring_linux.a 00:20:13.758 LIB libspdk_scheduler_dynamic.a 00:20:13.758 LIB libspdk_accel_iaa.a 00:20:13.758 LIB libspdk_accel_dsa.a 00:20:13.758 LIB libspdk_blob_bdev.a 00:20:13.758 LIB libspdk_vfu_device.a 00:20:14.016 LIB 
libspdk_sock_posix.a 00:20:14.016 CC module/bdev/malloc/bdev_malloc.o 00:20:14.016 CC module/bdev/malloc/bdev_malloc_rpc.o 00:20:14.016 CC module/bdev/split/vbdev_split.o 00:20:14.016 CC module/bdev/split/vbdev_split_rpc.o 00:20:14.016 CC module/bdev/nvme/bdev_nvme_rpc.o 00:20:14.016 CC module/bdev/nvme/nvme_rpc.o 00:20:14.016 CC module/bdev/nvme/bdev_nvme.o 00:20:14.016 CC module/bdev/nvme/vbdev_opal.o 00:20:14.016 CC module/bdev/nvme/bdev_mdns_client.o 00:20:14.016 CC module/bdev/delay/vbdev_delay.o 00:20:14.016 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:20:14.016 CC module/bdev/delay/vbdev_delay_rpc.o 00:20:14.016 CC module/bdev/nvme/vbdev_opal_rpc.o 00:20:14.016 CC module/bdev/null/bdev_null.o 00:20:14.016 CC module/bdev/null/bdev_null_rpc.o 00:20:14.016 CC module/bdev/passthru/vbdev_passthru.o 00:20:14.016 CC module/bdev/iscsi/bdev_iscsi.o 00:20:14.016 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:20:14.016 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:20:14.016 CC module/bdev/error/vbdev_error_rpc.o 00:20:14.016 CC module/bdev/error/vbdev_error.o 00:20:14.016 CC module/bdev/gpt/gpt.o 00:20:14.016 CC module/bdev/gpt/vbdev_gpt.o 00:20:14.017 CC module/blobfs/bdev/blobfs_bdev.o 00:20:14.017 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:20:14.017 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:20:14.017 CC module/bdev/zone_block/vbdev_zone_block.o 00:20:14.017 CC module/bdev/aio/bdev_aio.o 00:20:14.017 CC module/bdev/aio/bdev_aio_rpc.o 00:20:14.017 CC module/bdev/ftl/bdev_ftl.o 00:20:14.017 CC module/bdev/ftl/bdev_ftl_rpc.o 00:20:14.017 CC module/bdev/virtio/bdev_virtio_scsi.o 00:20:14.017 CC module/bdev/virtio/bdev_virtio_blk.o 00:20:14.017 CC module/bdev/virtio/bdev_virtio_rpc.o 00:20:14.017 CC module/bdev/raid/bdev_raid_rpc.o 00:20:14.017 CC module/bdev/raid/bdev_raid_sb.o 00:20:14.017 CC module/bdev/raid/bdev_raid.o 00:20:14.017 CC module/bdev/lvol/vbdev_lvol.o 00:20:14.017 CC module/bdev/raid/raid0.o 00:20:14.017 CC module/bdev/raid/raid1.o 00:20:14.017 CC module/bdev/raid/concat.o 00:20:14.017 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:20:14.274 LIB libspdk_blobfs_bdev.a 00:20:14.274 LIB libspdk_bdev_split.a 00:20:14.274 LIB libspdk_bdev_null.a 00:20:14.274 LIB libspdk_bdev_gpt.a 00:20:14.274 LIB libspdk_bdev_passthru.a 00:20:14.274 LIB libspdk_bdev_malloc.a 00:20:14.274 LIB libspdk_bdev_iscsi.a 00:20:14.274 LIB libspdk_bdev_delay.a 00:20:14.274 LIB libspdk_bdev_aio.a 00:20:14.534 LIB libspdk_bdev_error.a 00:20:14.534 LIB libspdk_bdev_ftl.a 00:20:14.534 LIB libspdk_bdev_zone_block.a 00:20:14.534 LIB libspdk_bdev_lvol.a 00:20:14.534 LIB libspdk_bdev_virtio.a 00:20:14.794 LIB libspdk_bdev_raid.a 00:20:15.362 LIB libspdk_bdev_nvme.a 00:20:15.930 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:20:15.930 CC module/event/subsystems/keyring/keyring.o 00:20:15.930 CC module/event/subsystems/scheduler/scheduler.o 00:20:15.930 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:20:15.930 CC module/event/subsystems/sock/sock.o 00:20:15.930 CC module/event/subsystems/vmd/vmd.o 00:20:15.930 CC module/event/subsystems/vmd/vmd_rpc.o 00:20:15.930 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:20:15.930 CC module/event/subsystems/iobuf/iobuf.o 00:20:16.190 LIB libspdk_event_keyring.a 00:20:16.190 LIB libspdk_event_vhost_blk.a 00:20:16.190 LIB libspdk_event_scheduler.a 00:20:16.190 LIB libspdk_event_sock.a 00:20:16.190 LIB libspdk_event_vfu_tgt.a 00:20:16.190 LIB libspdk_event_vmd.a 00:20:16.190 LIB libspdk_event_iobuf.a 00:20:16.449 CC module/event/subsystems/accel/accel.o 00:20:16.708 LIB 
libspdk_event_accel.a 00:20:16.967 CC module/event/subsystems/bdev/bdev.o 00:20:16.967 LIB libspdk_event_bdev.a 00:20:17.225 CC module/event/subsystems/scsi/scsi.o 00:20:17.484 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:20:17.484 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:20:17.484 CC module/event/subsystems/ublk/ublk.o 00:20:17.484 CC module/event/subsystems/nbd/nbd.o 00:20:17.484 LIB libspdk_event_scsi.a 00:20:17.484 LIB libspdk_event_ublk.a 00:20:17.484 LIB libspdk_event_nbd.a 00:20:17.484 LIB libspdk_event_nvmf.a 00:20:17.742 CC module/event/subsystems/iscsi/iscsi.o 00:20:17.742 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:20:18.002 LIB libspdk_event_iscsi.a 00:20:18.002 LIB libspdk_event_vhost_scsi.a 00:20:18.268 CC test/rpc_client/rpc_client_test.o 00:20:18.268 CXX app/trace/trace.o 00:20:18.268 TEST_HEADER include/spdk/accel.h 00:20:18.268 TEST_HEADER include/spdk/accel_module.h 00:20:18.268 TEST_HEADER include/spdk/assert.h 00:20:18.268 TEST_HEADER include/spdk/barrier.h 00:20:18.268 TEST_HEADER include/spdk/bdev.h 00:20:18.268 TEST_HEADER include/spdk/base64.h 00:20:18.268 TEST_HEADER include/spdk/bdev_zone.h 00:20:18.268 TEST_HEADER include/spdk/bit_array.h 00:20:18.268 TEST_HEADER include/spdk/bdev_module.h 00:20:18.268 TEST_HEADER include/spdk/bit_pool.h 00:20:18.268 TEST_HEADER include/spdk/blob_bdev.h 00:20:18.268 CC app/spdk_lspci/spdk_lspci.o 00:20:18.268 TEST_HEADER include/spdk/blobfs.h 00:20:18.268 TEST_HEADER include/spdk/blobfs_bdev.h 00:20:18.268 TEST_HEADER include/spdk/blob.h 00:20:18.268 TEST_HEADER include/spdk/conf.h 00:20:18.268 TEST_HEADER include/spdk/config.h 00:20:18.268 TEST_HEADER include/spdk/cpuset.h 00:20:18.268 TEST_HEADER include/spdk/crc16.h 00:20:18.268 TEST_HEADER include/spdk/crc32.h 00:20:18.268 CC app/spdk_top/spdk_top.o 00:20:18.268 TEST_HEADER include/spdk/crc64.h 00:20:18.268 CC app/spdk_nvme_perf/perf.o 00:20:18.268 CC app/spdk_nvme_discover/discovery_aer.o 00:20:18.268 CC app/trace_record/trace_record.o 00:20:18.268 TEST_HEADER include/spdk/dif.h 00:20:18.268 CC app/spdk_nvme_identify/identify.o 00:20:18.268 TEST_HEADER include/spdk/dma.h 00:20:18.268 TEST_HEADER include/spdk/endian.h 00:20:18.269 TEST_HEADER include/spdk/env_dpdk.h 00:20:18.269 TEST_HEADER include/spdk/env.h 00:20:18.269 TEST_HEADER include/spdk/event.h 00:20:18.269 TEST_HEADER include/spdk/fd_group.h 00:20:18.269 TEST_HEADER include/spdk/fd.h 00:20:18.269 TEST_HEADER include/spdk/file.h 00:20:18.269 TEST_HEADER include/spdk/ftl.h 00:20:18.269 TEST_HEADER include/spdk/gpt_spec.h 00:20:18.269 TEST_HEADER include/spdk/hexlify.h 00:20:18.269 TEST_HEADER include/spdk/histogram_data.h 00:20:18.269 CC app/iscsi_tgt/iscsi_tgt.o 00:20:18.269 TEST_HEADER include/spdk/idxd.h 00:20:18.269 TEST_HEADER include/spdk/idxd_spec.h 00:20:18.269 CC examples/interrupt_tgt/interrupt_tgt.o 00:20:18.269 TEST_HEADER include/spdk/init.h 00:20:18.269 TEST_HEADER include/spdk/ioat.h 00:20:18.269 CC app/spdk_dd/spdk_dd.o 00:20:18.269 TEST_HEADER include/spdk/ioat_spec.h 00:20:18.269 CC app/nvmf_tgt/nvmf_main.o 00:20:18.269 TEST_HEADER include/spdk/iscsi_spec.h 00:20:18.269 CC app/vhost/vhost.o 00:20:18.269 TEST_HEADER include/spdk/json.h 00:20:18.269 TEST_HEADER include/spdk/jsonrpc.h 00:20:18.269 TEST_HEADER include/spdk/keyring.h 00:20:18.269 TEST_HEADER include/spdk/keyring_module.h 00:20:18.269 CC examples/vmd/lsvmd/lsvmd.o 00:20:18.269 CC test/thread/lock/spdk_lock.o 00:20:18.269 TEST_HEADER include/spdk/likely.h 00:20:18.269 CC test/env/vtophys/vtophys.o 00:20:18.269 TEST_HEADER 
include/spdk/log.h 00:20:18.269 CC test/app/histogram_perf/histogram_perf.o 00:20:18.269 CC test/app/jsoncat/jsoncat.o 00:20:18.269 CC examples/vmd/led/led.o 00:20:18.269 CC examples/sock/hello_world/hello_sock.o 00:20:18.269 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:20:18.269 TEST_HEADER include/spdk/lvol.h 00:20:18.269 CC test/thread/poller_perf/poller_perf.o 00:20:18.269 CC test/event/event_perf/event_perf.o 00:20:18.269 CC test/nvme/err_injection/err_injection.o 00:20:18.269 TEST_HEADER include/spdk/memory.h 00:20:18.269 CC test/env/pci/pci_ut.o 00:20:18.269 CC examples/ioat/verify/verify.o 00:20:18.269 CC test/nvme/reset/reset.o 00:20:18.269 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:20:18.269 TEST_HEADER include/spdk/mmio.h 00:20:18.269 CC examples/ioat/perf/perf.o 00:20:18.269 CC examples/nvme/nvme_manage/nvme_manage.o 00:20:18.269 CC examples/nvme/hello_world/hello_world.o 00:20:18.269 CC test/nvme/sgl/sgl.o 00:20:18.269 CC app/spdk_tgt/spdk_tgt.o 00:20:18.269 CC test/event/reactor_perf/reactor_perf.o 00:20:18.269 CC test/env/memory/memory_ut.o 00:20:18.269 TEST_HEADER include/spdk/nbd.h 00:20:18.269 CC examples/nvme/abort/abort.o 00:20:18.269 CC test/nvme/overhead/overhead.o 00:20:18.269 CC test/nvme/connect_stress/connect_stress.o 00:20:18.269 CC test/nvme/reserve/reserve.o 00:20:18.269 CC test/nvme/boot_partition/boot_partition.o 00:20:18.269 TEST_HEADER include/spdk/notify.h 00:20:18.269 CC test/app/stub/stub.o 00:20:18.269 CC examples/nvme/hotplug/hotplug.o 00:20:18.269 CC examples/nvme/reconnect/reconnect.o 00:20:18.269 CC examples/accel/perf/accel_perf.o 00:20:18.269 TEST_HEADER include/spdk/nvme.h 00:20:18.269 CC examples/nvme/arbitration/arbitration.o 00:20:18.269 CC app/fio/nvme/fio_plugin.o 00:20:18.269 CC examples/idxd/perf/perf.o 00:20:18.269 CC test/event/reactor/reactor.o 00:20:18.269 CC test/nvme/e2edp/nvme_dp.o 00:20:18.269 CC examples/util/zipf/zipf.o 00:20:18.269 CC examples/nvme/cmb_copy/cmb_copy.o 00:20:18.269 CC test/nvme/startup/startup.o 00:20:18.532 TEST_HEADER include/spdk/nvme_intel.h 00:20:18.532 CC test/nvme/simple_copy/simple_copy.o 00:20:18.532 CC test/event/app_repeat/app_repeat.o 00:20:18.532 CC test/nvme/aer/aer.o 00:20:18.532 TEST_HEADER include/spdk/nvme_ocssd.h 00:20:18.532 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:20:18.532 TEST_HEADER include/spdk/nvme_spec.h 00:20:18.532 CC test/nvme/compliance/nvme_compliance.o 00:20:18.532 TEST_HEADER include/spdk/nvme_zns.h 00:20:18.532 TEST_HEADER include/spdk/nvmf_cmd.h 00:20:18.532 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:20:18.532 TEST_HEADER include/spdk/nvmf.h 00:20:18.532 TEST_HEADER include/spdk/nvmf_spec.h 00:20:18.532 CC test/dma/test_dma/test_dma.o 00:20:18.532 CC test/accel/dif/dif.o 00:20:18.532 TEST_HEADER include/spdk/nvmf_transport.h 00:20:18.532 CC examples/blob/cli/blobcli.o 00:20:18.532 TEST_HEADER include/spdk/opal.h 00:20:18.532 CC examples/blob/hello_world/hello_blob.o 00:20:18.532 TEST_HEADER include/spdk/opal_spec.h 00:20:18.532 TEST_HEADER include/spdk/pci_ids.h 00:20:18.532 CC test/blobfs/mkfs/mkfs.o 00:20:18.532 TEST_HEADER include/spdk/pipe.h 00:20:18.532 TEST_HEADER include/spdk/queue.h 00:20:18.532 CC examples/nvmf/nvmf/nvmf.o 00:20:18.532 TEST_HEADER include/spdk/reduce.h 00:20:18.532 CC examples/bdev/bdevperf/bdevperf.o 00:20:18.532 TEST_HEADER include/spdk/rpc.h 00:20:18.532 LINK spdk_lspci 00:20:18.532 CC test/bdev/bdevio/bdevio.o 00:20:18.532 TEST_HEADER include/spdk/scheduler.h 00:20:18.532 CC test/event/scheduler/scheduler.o 00:20:18.532 
TEST_HEADER include/spdk/scsi.h 00:20:18.532 CC examples/bdev/hello_world/hello_bdev.o 00:20:18.532 TEST_HEADER include/spdk/scsi_spec.h 00:20:18.532 CC test/app/bdev_svc/bdev_svc.o 00:20:18.532 TEST_HEADER include/spdk/sock.h 00:20:18.532 CC examples/thread/thread/thread_ex.o 00:20:18.532 TEST_HEADER include/spdk/stdinc.h 00:20:18.532 TEST_HEADER include/spdk/string.h 00:20:18.532 TEST_HEADER include/spdk/thread.h 00:20:18.532 TEST_HEADER include/spdk/trace.h 00:20:18.532 LINK rpc_client_test 00:20:18.532 TEST_HEADER include/spdk/trace_parser.h 00:20:18.532 TEST_HEADER include/spdk/tree.h 00:20:18.532 TEST_HEADER include/spdk/ublk.h 00:20:18.532 TEST_HEADER include/spdk/util.h 00:20:18.532 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:20:18.532 CC test/lvol/esnap/esnap.o 00:20:18.532 CC test/env/mem_callbacks/mem_callbacks.o 00:20:18.532 TEST_HEADER include/spdk/uuid.h 00:20:18.532 TEST_HEADER include/spdk/version.h 00:20:18.532 TEST_HEADER include/spdk/vfio_user_pci.h 00:20:18.532 TEST_HEADER include/spdk/vfio_user_spec.h 00:20:18.532 TEST_HEADER include/spdk/vhost.h 00:20:18.532 TEST_HEADER include/spdk/vmd.h 00:20:18.532 TEST_HEADER include/spdk/xor.h 00:20:18.532 TEST_HEADER include/spdk/zipf.h 00:20:18.532 CXX test/cpp_headers/accel.o 00:20:18.532 LINK spdk_nvme_discover 00:20:18.532 LINK lsvmd 00:20:18.532 LINK jsoncat 00:20:18.532 LINK led 00:20:18.532 LINK poller_perf 00:20:18.532 LINK reactor_perf 00:20:18.532 LINK event_perf 00:20:18.532 LINK vtophys 00:20:18.532 LINK histogram_perf 00:20:18.532 LINK spdk_trace_record 00:20:18.532 LINK env_dpdk_post_init 00:20:18.532 LINK reactor 00:20:18.532 LINK nvmf_tgt 00:20:18.532 LINK interrupt_tgt 00:20:18.532 LINK zipf 00:20:18.532 LINK iscsi_tgt 00:20:18.532 LINK app_repeat 00:20:18.532 LINK vhost 00:20:18.532 LINK boot_partition 00:20:18.532 LINK err_injection 00:20:18.532 LINK pmr_persistence 00:20:18.532 LINK connect_stress 00:20:18.532 LINK stub 00:20:18.532 LINK startup 00:20:18.532 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:20:18.532 struct spdk_nvme_fdp_ruhs ruhs; 00:20:18.532 ^ 00:20:18.802 LINK reserve 00:20:18.802 LINK hello_world 00:20:18.802 LINK spdk_tgt 00:20:18.802 LINK hello_sock 00:20:18.802 LINK verify 00:20:18.802 LINK ioat_perf 00:20:18.802 LINK cmb_copy 00:20:18.802 LINK hotplug 00:20:18.802 LINK simple_copy 00:20:18.802 LINK reset 00:20:18.802 LINK mkfs 00:20:18.802 LINK bdev_svc 00:20:18.802 LINK hello_blob 00:20:18.802 LINK aer 00:20:18.802 LINK sgl 00:20:18.802 CXX test/cpp_headers/accel_module.o 00:20:18.802 LINK overhead 00:20:18.802 LINK scheduler 00:20:18.802 LINK nvme_dp 00:20:18.802 LINK spdk_trace 00:20:18.802 LINK thread 00:20:18.802 LINK hello_bdev 00:20:18.802 LINK idxd_perf 00:20:18.802 LINK nvmf 00:20:18.802 LINK reconnect 00:20:18.802 LINK abort 00:20:18.802 LINK arbitration 00:20:19.062 LINK test_dma 00:20:19.062 LINK spdk_dd 00:20:19.062 LINK pci_ut 00:20:19.062 LINK nvme_manage 00:20:19.062 LINK bdevio 00:20:19.062 LINK nvme_compliance 00:20:19.062 CXX test/cpp_headers/assert.o 00:20:19.062 LINK nvme_fuzz 00:20:19.062 LINK accel_perf 00:20:19.062 LINK dif 00:20:19.062 1 warning generated. 
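
(Editor's note on the single warning reported above: clang's -Wgnu-variable-sized-type-not-at-end fires when a struct whose last member is a flexible array member — here struct spdk_nvme_fdp_ruhs — is embedded as a field that is not the last field of an enclosing struct. GNU C accepts this layout as an extension, so the build prints the diagnostic and continues. The sketch below is a minimal, self-contained reproduction of the same shape; the type names and the wrapper layout are hypothetical and only mirror what the fio_plugin.c:1559 message implies, they are not SPDK's actual definitions.)

/* warn_demo.c - minimal illustration of -Wgnu-variable-sized-type-not-at-end.
 * Hypothetical types; only the shape mirrors the fio_plugin.c warning above. */
#include <stdint.h>
#include <stdio.h>

struct ruhs_desc {
        uint16_t pid;
        uint8_t  rsvd[14];
};

/* A "variable sized type": its last member is a flexible array member. */
struct ruhs {
        uint32_t         nruhsd;
        struct ruhs_desc desc[];        /* flexible array member */
};

int main(void)
{
        /* Placing the flexible-array struct before another field is the GNU
         * extension clang warns about; the trailing array provides backing
         * storage that the code can then address through header.desc[]. */
        struct {
                struct ruhs      header;        /* not at the end -> warning */
                struct ruhs_desc storage[8];    /* room for header.desc[] entries */
        } buf = { .header = { .nruhsd = 8 } };

        printf("room for %u descriptors\n", (unsigned)buf.header.nruhsd);
        return 0;
}

(Compiling a file like this with clang -Wall -c warn_demo.c reproduces the same diagnostic; keeping the flexible-array struct as the final field, or using a plain byte buffer plus a cast, avoids it.)
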
00:20:19.062 LINK blobcli 00:20:19.062 LINK spdk_nvme 00:20:19.320 CXX test/cpp_headers/barrier.o 00:20:19.320 LINK mem_callbacks 00:20:19.320 LINK spdk_nvme_identify 00:20:19.320 CXX test/cpp_headers/base64.o 00:20:19.320 CC test/nvme/doorbell_aers/doorbell_aers.o 00:20:19.320 CC test/nvme/fused_ordering/fused_ordering.o 00:20:19.320 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:20:19.587 CXX test/cpp_headers/bdev.o 00:20:19.587 CXX test/cpp_headers/bdev_module.o 00:20:19.587 LINK spdk_nvme_perf 00:20:19.587 CXX test/cpp_headers/bdev_zone.o 00:20:19.587 CXX test/cpp_headers/bit_array.o 00:20:19.587 CC test/nvme/cuse/cuse.o 00:20:19.587 LINK bdevperf 00:20:19.587 CC test/nvme/fdp/fdp.o 00:20:19.587 LINK spdk_top 00:20:19.587 CC app/fio/bdev/fio_plugin.o 00:20:19.587 CXX test/cpp_headers/bit_pool.o 00:20:19.587 CXX test/cpp_headers/blob_bdev.o 00:20:19.587 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:20:19.587 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:20:19.587 CXX test/cpp_headers/blobfs_bdev.o 00:20:19.587 LINK doorbell_aers 00:20:19.587 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:20:19.587 CXX test/cpp_headers/blobfs.o 00:20:19.587 LINK fused_ordering 00:20:19.845 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:20:19.845 CXX test/cpp_headers/blob.o 00:20:19.845 CXX test/cpp_headers/conf.o 00:20:19.845 CXX test/cpp_headers/config.o 00:20:19.845 CXX test/cpp_headers/cpuset.o 00:20:19.845 CXX test/cpp_headers/crc16.o 00:20:19.845 CXX test/cpp_headers/crc32.o 00:20:19.845 CXX test/cpp_headers/crc64.o 00:20:19.845 CXX test/cpp_headers/dma.o 00:20:19.845 CXX test/cpp_headers/dif.o 00:20:19.845 LINK memory_ut 00:20:19.845 CXX test/cpp_headers/endian.o 00:20:19.845 LINK fdp 00:20:19.845 CXX test/cpp_headers/env_dpdk.o 00:20:19.845 CXX test/cpp_headers/env.o 00:20:19.845 CXX test/cpp_headers/event.o 00:20:20.108 CXX test/cpp_headers/fd_group.o 00:20:20.108 CXX test/cpp_headers/fd.o 00:20:20.108 CXX test/cpp_headers/file.o 00:20:20.108 CXX test/cpp_headers/ftl.o 00:20:20.108 CXX test/cpp_headers/gpt_spec.o 00:20:20.108 CXX test/cpp_headers/hexlify.o 00:20:20.108 CXX test/cpp_headers/histogram_data.o 00:20:20.108 CXX test/cpp_headers/idxd.o 00:20:20.108 CXX test/cpp_headers/idxd_spec.o 00:20:20.108 CXX test/cpp_headers/init.o 00:20:20.108 CXX test/cpp_headers/ioat.o 00:20:20.108 LINK llvm_vfio_fuzz 00:20:20.108 CXX test/cpp_headers/ioat_spec.o 00:20:20.108 CXX test/cpp_headers/iscsi_spec.o 00:20:20.109 CXX test/cpp_headers/json.o 00:20:20.109 CXX test/cpp_headers/jsonrpc.o 00:20:20.109 CXX test/cpp_headers/keyring.o 00:20:20.109 CXX test/cpp_headers/keyring_module.o 00:20:20.109 CXX test/cpp_headers/likely.o 00:20:20.109 CXX test/cpp_headers/log.o 00:20:20.109 CXX test/cpp_headers/lvol.o 00:20:20.109 CXX test/cpp_headers/memory.o 00:20:20.109 CXX test/cpp_headers/mmio.o 00:20:20.109 CXX test/cpp_headers/nbd.o 00:20:20.109 CXX test/cpp_headers/notify.o 00:20:20.109 CXX test/cpp_headers/nvme.o 00:20:20.109 CXX test/cpp_headers/nvme_intel.o 00:20:20.109 CXX test/cpp_headers/nvme_ocssd.o 00:20:20.109 CXX test/cpp_headers/nvme_ocssd_spec.o 00:20:20.109 CXX test/cpp_headers/nvme_spec.o 00:20:20.109 CXX test/cpp_headers/nvme_zns.o 00:20:20.109 CXX test/cpp_headers/nvmf_cmd.o 00:20:20.109 CXX test/cpp_headers/nvmf_fc_spec.o 00:20:20.109 CXX test/cpp_headers/nvmf.o 00:20:20.109 CXX test/cpp_headers/nvmf_spec.o 00:20:20.109 CXX test/cpp_headers/nvmf_transport.o 00:20:20.368 CXX test/cpp_headers/opal.o 00:20:20.368 CXX test/cpp_headers/opal_spec.o 00:20:20.368 CXX test/cpp_headers/pci_ids.o 
00:20:20.368 LINK spdk_bdev 00:20:20.368 CXX test/cpp_headers/pipe.o 00:20:20.368 CXX test/cpp_headers/queue.o 00:20:20.368 CXX test/cpp_headers/reduce.o 00:20:20.368 CXX test/cpp_headers/rpc.o 00:20:20.368 CXX test/cpp_headers/scheduler.o 00:20:20.368 CXX test/cpp_headers/scsi.o 00:20:20.368 CXX test/cpp_headers/scsi_spec.o 00:20:20.368 CXX test/cpp_headers/sock.o 00:20:20.368 CXX test/cpp_headers/stdinc.o 00:20:20.368 CXX test/cpp_headers/string.o 00:20:20.368 CXX test/cpp_headers/thread.o 00:20:20.368 LINK vhost_fuzz 00:20:20.368 CXX test/cpp_headers/trace.o 00:20:20.368 CXX test/cpp_headers/trace_parser.o 00:20:20.368 CXX test/cpp_headers/tree.o 00:20:20.368 CXX test/cpp_headers/ublk.o 00:20:20.368 CXX test/cpp_headers/util.o 00:20:20.368 CXX test/cpp_headers/uuid.o 00:20:20.368 CXX test/cpp_headers/version.o 00:20:20.368 CXX test/cpp_headers/vfio_user_pci.o 00:20:20.368 CXX test/cpp_headers/vfio_user_spec.o 00:20:20.368 CXX test/cpp_headers/vhost.o 00:20:20.368 CXX test/cpp_headers/vmd.o 00:20:20.368 CXX test/cpp_headers/xor.o 00:20:20.368 CXX test/cpp_headers/zipf.o 00:20:20.368 LINK spdk_lock 00:20:20.626 LINK llvm_nvme_fuzz 00:20:20.884 LINK cuse 00:20:21.142 LINK iscsi_fuzz 00:20:23.045 LINK esnap 00:20:23.611 00:20:23.611 real 0m44.870s 00:20:23.611 user 6m59.407s 00:20:23.611 sys 2m37.294s 00:20:23.611 11:31:07 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:20:23.611 11:31:07 make -- common/autotest_common.sh@10 -- $ set +x 00:20:23.611 ************************************ 00:20:23.611 END TEST make 00:20:23.611 ************************************ 00:20:23.611 11:31:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:20:23.611 11:31:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:23.611 11:31:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:23.611 11:31:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:23.611 11:31:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:20:23.611 11:31:07 -- pm/common@44 -- $ pid=3209687 00:20:23.611 11:31:07 -- pm/common@50 -- $ kill -TERM 3209687 00:20:23.611 11:31:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:23.611 11:31:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:20:23.611 11:31:07 -- pm/common@44 -- $ pid=3209688 00:20:23.611 11:31:07 -- pm/common@50 -- $ kill -TERM 3209688 00:20:23.611 11:31:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:23.611 11:31:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:20:23.611 11:31:07 -- pm/common@44 -- $ pid=3209690 00:20:23.611 11:31:07 -- pm/common@50 -- $ kill -TERM 3209690 00:20:23.611 11:31:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:23.611 11:31:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:20:23.611 11:31:07 -- pm/common@44 -- $ pid=3209714 00:20:23.611 11:31:07 -- pm/common@50 -- $ sudo -E kill -TERM 3209714 00:20:23.611 11:31:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.611 11:31:07 -- nvmf/common.sh@7 -- # uname -s 00:20:23.611 11:31:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.611 11:31:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.611 11:31:07 -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.611 11:31:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.611 11:31:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.611 11:31:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.611 11:31:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.611 11:31:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.611 11:31:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.611 11:31:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.611 11:31:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e69187-586b-e711-906e-0017a4403562 00:20:23.611 11:31:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80e69187-586b-e711-906e-0017a4403562 00:20:23.611 11:31:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.611 11:31:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.611 11:31:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:23.611 11:31:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.611 11:31:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:20:23.611 11:31:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.611 11:31:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.611 11:31:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.611 11:31:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.612 11:31:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.612 11:31:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.612 11:31:07 -- paths/export.sh@5 -- # export PATH 00:20:23.612 11:31:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.612 11:31:07 -- nvmf/common.sh@47 -- # : 0 00:20:23.612 11:31:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.612 11:31:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.612 11:31:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.612 11:31:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.612 11:31:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.612 11:31:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.612 11:31:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.612 11:31:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.612 11:31:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:20:23.612 11:31:07 -- 
spdk/autotest.sh@32 -- # uname -s 00:20:23.612 11:31:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:20:23.612 11:31:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:20:23.612 11:31:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:20:23.612 11:31:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:20:23.612 11:31:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:20:23.612 11:31:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:20:23.612 11:31:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:20:23.612 11:31:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:20:23.612 11:31:07 -- spdk/autotest.sh@48 -- # udevadm_pid=3267573 00:20:23.612 11:31:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:20:23.612 11:31:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:20:23.612 11:31:07 -- pm/common@17 -- # local monitor 00:20:23.612 11:31:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:20:23.612 11:31:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:20:23.612 11:31:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:20:23.612 11:31:07 -- pm/common@21 -- # date +%s 00:20:23.612 11:31:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:20:23.612 11:31:07 -- pm/common@21 -- # date +%s 00:20:23.612 11:31:07 -- pm/common@25 -- # sleep 1 00:20:23.612 11:31:07 -- pm/common@21 -- # date +%s 00:20:23.612 11:31:07 -- pm/common@21 -- # date +%s 00:20:23.612 11:31:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011867 00:20:23.612 11:31:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011867 00:20:23.612 11:31:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011867 00:20:23.612 11:31:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011867 00:20:23.868 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011867_collect-vmstat.pm.log 00:20:23.868 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011867_collect-cpu-load.pm.log 00:20:23.868 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011867_collect-cpu-temp.pm.log 00:20:23.868 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011867_collect-bmc-pm.bmc.pm.log 00:20:24.798 11:31:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:20:24.798 11:31:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:20:24.798 11:31:08 -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:24.798 11:31:08 -- 
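autotest.sh records the existing kernel core pattern (the systemd-coredump pipe above), creates a per-run coredumps directory, and points crash dumps at its own core-collector.sh instead. The xtrace only shows the echo of the new pattern and of the dump directory, not the redirection targets, so the sketch below assumes the conventional /proc/sys/kernel/core_pattern mechanism rather than quoting the SPDK code:

    # Redirect kernel core dumps to a collector script for the duration of a run.
    coredump_dir=/tmp/coredumps              # illustrative; the log uses .../output/coredumps
    collector=/path/to/core-collector.sh     # illustrative path

    old_core_pattern=$(</proc/sys/kernel/core_pattern)   # remember the original
    mkdir -p "$coredump_dir"

    # A leading '|' makes the kernel pipe each dump into the named program;
    # %P = PID, %s = signal number, %t = dump time (see core(5)).
    echo "|$collector %P %s %t" | sudo tee /proc/sys/kernel/core_pattern >/dev/null

    # ... run the tests ...

    echo "$old_core_pattern" | sudo tee /proc/sys/kernel/core_pattern >/dev/null   # restore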
common/autotest_common.sh@10 -- # set +x 00:20:24.798 11:31:08 -- spdk/autotest.sh@59 -- # create_test_list 00:20:24.798 11:31:08 -- common/autotest_common.sh@747 -- # xtrace_disable 00:20:24.798 11:31:08 -- common/autotest_common.sh@10 -- # set +x 00:20:24.798 11:31:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:20:24.798 11:31:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:20:24.798 11:31:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:20:24.798 11:31:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:20:24.798 11:31:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:20:24.798 11:31:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:20:24.798 11:31:08 -- common/autotest_common.sh@1454 -- # uname 00:20:24.798 11:31:08 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:20:24.799 11:31:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:20:24.799 11:31:08 -- common/autotest_common.sh@1474 -- # uname 00:20:24.799 11:31:08 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:20:24.799 11:31:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:20:24.799 11:31:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:20:24.799 11:31:08 -- spdk/autotest.sh@72 -- # hash lcov 00:20:24.799 11:31:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:20:24.799 11:31:08 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:20:24.799 11:31:08 -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:24.799 11:31:08 -- common/autotest_common.sh@10 -- # set +x 00:20:24.799 11:31:08 -- spdk/autotest.sh@91 -- # rm -f 00:20:24.799 11:31:08 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:20:28.080 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:20:28.080 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:20:28.080 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:20:28.080 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:20:28.080 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:20:28.080 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:20:28.080 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:20:28.080 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:20:28.080 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:20:28.338 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:20:28.338 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:20:28.338 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:20:28.338 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:20:28.338 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:20:28.338 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:20:28.338 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:20:28.338 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:20:28.338 11:31:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:20:28.338 11:31:12 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:20:28.338 11:31:12 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:20:28.338 11:31:12 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:20:28.338 11:31:12 -- common/autotest_common.sh@1671 -- # for nvme in 
/sys/block/nvme* 00:20:28.338 11:31:12 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:20:28.338 11:31:12 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:20:28.338 11:31:12 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:28.338 11:31:12 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:20:28.338 11:31:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:20:28.338 11:31:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:28.338 11:31:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:28.338 11:31:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:20:28.338 11:31:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:20:28.338 11:31:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:20:28.596 No valid GPT data, bailing 00:20:28.596 11:31:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:28.596 11:31:12 -- scripts/common.sh@391 -- # pt= 00:20:28.596 11:31:12 -- scripts/common.sh@392 -- # return 1 00:20:28.596 11:31:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:20:28.596 1+0 records in 00:20:28.596 1+0 records out 00:20:28.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00796436 s, 132 MB/s 00:20:28.596 11:31:12 -- spdk/autotest.sh@118 -- # sync 00:20:28.596 11:31:12 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:20:28.596 11:31:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:20:28.596 11:31:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:20:33.924 11:31:17 -- spdk/autotest.sh@124 -- # uname -s 00:20:33.924 11:31:17 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:20:33.924 11:31:17 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:20:33.924 11:31:17 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:33.924 11:31:17 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:33.924 11:31:17 -- common/autotest_common.sh@10 -- # set +x 00:20:33.924 ************************************ 00:20:33.924 START TEST setup.sh 00:20:33.924 ************************************ 00:20:33.924 11:31:17 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:20:33.924 * Looking for test storage... 00:20:33.924 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:20:33.924 11:31:17 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:20:33.924 11:31:17 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:20:33.924 11:31:17 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:20:33.924 11:31:17 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:33.924 11:31:17 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:33.924 11:31:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:20:33.924 ************************************ 00:20:33.924 START TEST acl 00:20:33.924 ************************************ 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:20:33.924 * Looking for test storage... 
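Before the setup tests run, each NVMe namespace is screened in two steps: /sys/block/<name>/queue/zoned must read "none" (zoned namespaces are never touched), and the namespace is only treated as free when neither spdk-gpt.py nor blkid finds a partition-table signature, which is what "No valid GPT data, bailing" and the empty PTTYPE mean above; only then is its first MiB zeroed. A rough equivalent of that guard, using blkid alone in place of the spdk-gpt.py helper:

    # Zero the first MiB of unused, non-zoned NVMe namespaces (requires root).
    declare -A zoned_devs=()

    for sysdev in /sys/block/nvme*; do
        [[ -e $sysdev ]] || continue
        name=${sysdev##*/}
        # anything other than "none" marks a zoned block device
        if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
            zoned_devs[$name]=1
            continue
        fi
        # an empty PTTYPE means blkid found no partition table on the namespace
        if [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]]; then
            dd if=/dev/zero of="/dev/$name" bs=1M count=1
        fi
    done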
00:20:33.924 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:20:33.924 11:31:17 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:33.924 11:31:17 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:20:33.924 11:31:17 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:20:33.924 11:31:17 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:20:33.924 11:31:17 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:20:33.924 11:31:17 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:20:33.924 11:31:17 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:20:33.924 11:31:17 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:33.924 11:31:17 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:20:37.215 11:31:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:20:37.215 11:31:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:20:37.215 11:31:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:20:37.215 11:31:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:37.215 11:31:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:20:37.215 11:31:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:20:40.503 Hugepages 00:20:40.503 node hugesize free / total 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 00:20:40.503 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:20:40.503 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:20:40.504 11:31:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:20:40.504 11:31:24 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:40.504 11:31:24 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:40.504 11:31:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:20:40.504 ************************************ 00:20:40.504 START TEST denied 00:20:40.504 ************************************ 00:20:40.504 11:31:24 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:20:40.504 11:31:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:20:40.504 11:31:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:20:40.504 11:31:24 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:20:40.504 11:31:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:20:40.504 11:31:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:20:43.790 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:20:43.790 11:31:27 
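The acl helper above walks the `setup.sh status` listing with read -r _ dev _ _ _ driver _: hugepage and header rows fail the BDF pattern test, ioatdma rows are skipped, and a device is recorded only when its driver column is nvme and its BDF is not in PCI_BLOCKED. A compact version of that filter (the setup.sh invocation is illustrative):

    # Collect NVMe controllers from a "Type BDF Vendor Device NUMA Driver ..." listing.
    declare -a devs=()
    declare -A drivers=()

    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]]  || continue            # skip hugepage summary and header rows
        [[ $driver == nvme ]]  || continue            # ioatdma and friends are not of interest
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue    # honour the block list
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(./scripts/setup.sh status)

    printf 'found %d NVMe controller(s): %s\n' "${#devs[@]}" "${devs[*]}"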
setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:43.790 11:31:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:20:49.064 00:20:49.064 real 0m8.118s 00:20:49.064 user 0m2.454s 00:20:49.064 sys 0m4.936s 00:20:49.064 11:31:32 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:49.064 11:31:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:20:49.064 ************************************ 00:20:49.064 END TEST denied 00:20:49.064 ************************************ 00:20:49.064 11:31:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:20:49.064 11:31:32 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:49.064 11:31:32 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:49.064 11:31:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:20:49.064 ************************************ 00:20:49.064 START TEST allowed 00:20:49.064 ************************************ 00:20:49.064 11:31:32 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:20:49.064 11:31:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:20:49.064 11:31:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:20:49.064 11:31:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:20:49.064 11:31:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:20:49.064 11:31:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:20:57.189 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:20:57.189 11:31:41 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:20:57.189 11:31:41 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:20:57.189 11:31:41 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:20:57.189 11:31:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:57.189 11:31:41 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:21:01.372 00:21:01.372 real 0m12.241s 00:21:01.372 user 0m2.599s 00:21:01.372 sys 0m4.852s 00:21:01.372 11:31:44 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:01.372 11:31:44 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 ************************************ 00:21:01.372 END TEST allowed 00:21:01.372 ************************************ 00:21:01.372 00:21:01.372 real 0m27.460s 00:21:01.372 user 0m7.507s 00:21:01.372 sys 0m14.635s 00:21:01.372 11:31:44 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:01.372 11:31:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 ************************************ 00:21:01.372 END TEST acl 00:21:01.372 ************************************ 00:21:01.372 11:31:44 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:21:01.372 11:31:44 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:01.372 
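Both acl sub-tests finish the same way: resolve /sys/bus/pci/devices/<bdf>/driver and compare the driver name against what the test expects, nvme for the denied run (setup.sh config must have skipped the blocked controller) and a rebind, nvme -> vfio-pci in this log, for the allowed run. A small helper in that spirit; the BDF is the controller from this run and will differ elsewhere:

    # Report which kernel driver a PCI function is currently bound to.
    pci_driver() {
        local bdf=$1
        [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1   # not bound to any driver
        basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"
    }

    bdf=0000:5e:00.0
    case $(pci_driver "$bdf") in
        nvme)     echo "$bdf is still owned by the kernel nvme driver" ;;
        vfio-pci) echo "$bdf has been handed to vfio-pci for userspace use" ;;
        *)        echo "$bdf is unbound or bound to something else" ;;
    esac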
11:31:44 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:01.372 11:31:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 ************************************ 00:21:01.372 START TEST hugepages 00:21:01.372 ************************************ 00:21:01.372 11:31:44 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:21:01.372 * Looking for test storage... 00:21:01.372 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 69426648 kB' 'MemAvailable: 73416632 kB' 'Buffers: 9244 kB' 'Cached: 15958428 kB' 'SwapCached: 0 kB' 'Active: 12870148 kB' 'Inactive: 3739908 kB' 'Active(anon): 12319816 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645700 kB' 'Mapped: 192948 kB' 'Shmem: 11677432 kB' 'KReclaimable: 535956 kB' 'Slab: 880388 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344432 kB' 'KernelStack: 16480 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438144 kB' 'Committed_AS: 13723920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201624 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.372 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.373 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.374 11:31:45 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
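The long run of trace above is get_meminfo stepping through /proc/meminfo field by field until it reaches Hugepagesize and echoing 2048, followed by clear_hp writing 0 into every per-node hugepage pool. The same lookup can be written much more compactly; this is a simplified equivalent, not the SPDK helper itself:

    # Value of one meminfo field, system-wide or for a single NUMA node.
    get_meminfo() {
        local field=$1 node=${2-} file=/proc/meminfo
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        awk -v f="$field" '$0 ~ ("^(Node [0-9]+ )?" f ":") {
            sub(/ kB$/, "")          # strip the unit when present
            print $NF; exit
        }' "$file"
    }

    get_meminfo Hugepagesize         # -> 2048 on the machine in this log
    get_meminfo HugePages_Free 0     # free 2 MiB pages on NUMA node 0

    # clear_hp, as traced above: reset every per-node hugepage pool (needs root).
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done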
00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:21:01.374 11:31:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:21:01.374 11:31:45 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:01.374 11:31:45 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:01.374 11:31:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:01.374 ************************************ 00:21:01.374 START TEST default_setup 00:21:01.374 ************************************ 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:21:01.374 11:31:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:21:04.663 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:00:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:21:04.663 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:04.663 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:09.945 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71717904 kB' 'MemAvailable: 75707888 kB' 'Buffers: 9244 kB' 'Cached: 15958564 kB' 'SwapCached: 0 kB' 'Active: 12888184 kB' 'Inactive: 3739908 kB' 'Active(anon): 12337852 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663316 kB' 'Mapped: 192956 kB' 'Shmem: 11677568 kB' 'KReclaimable: 535956 kB' 'Slab: 879876 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 343920 kB' 'KernelStack: 17008 kB' 'PageTables: 10528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13741228 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 201784 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.945 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
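The xtrace run up to this point is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "key: value" pair at a time until the requested key (AnonHugePages, for hugepages.sh@97) matches, echoing its numeric value and returning; every non-matching key takes the "continue" branch, which is why each field produces the same four trace lines. A minimal standalone sketch of that loop follows, with assumptions: the function name get_meminfo_sketch is hypothetical, and it omits the per-NUMA-node /sys/devices/system/node/node<N>/meminfo path and the "Node <N> " prefix stripping that the real helper performs.

#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup seen in this trace (simplified, not the SPDK original).
get_meminfo_sketch() {
    local get=$1        # requested key, e.g. AnonHugePages or HugePages_Surp
    local var val _
    # IFS=': ' splits "MemTotal:   92293388 kB" into var=MemTotal, val=92293388, _=kB
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"     # numeric part only; the unit (if any) lands in "_"
            return 0
        fi
        # non-matching keys fall through, mirroring the repeated "continue" entries above
    done < /proc/meminfo
    return 1
}

# Example on the host in this trace: get_meminfo_sketch HugePages_Total would print 1024,
# and get_meminfo_sketch AnonHugePages prints 0, which is the anon=0 value recorded above.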
00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.946 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71720612 kB' 'MemAvailable: 75710596 kB' 'Buffers: 9244 kB' 'Cached: 15958568 kB' 'SwapCached: 0 kB' 'Active: 12888104 kB' 'Inactive: 3739908 kB' 'Active(anon): 12337772 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663740 kB' 'Mapped: 192864 kB' 'Shmem: 11677572 kB' 'KReclaimable: 535956 kB' 'Slab: 879820 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 343864 kB' 'KernelStack: 17264 kB' 'PageTables: 10664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13741244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201688 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.947 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:09.948 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.948 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71720916 kB' 'MemAvailable: 75710900 kB' 'Buffers: 9244 kB' 'Cached: 15958568 kB' 'SwapCached: 0 kB' 'Active: 12888024 kB' 'Inactive: 3739908 kB' 'Active(anon): 12337692 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663660 kB' 'Mapped: 192864 kB' 'Shmem: 11677572 kB' 'KReclaimable: 535956 kB' 'Slab: 879820 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 343864 kB' 'KernelStack: 17104 kB' 'PageTables: 11080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13740144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201672 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.949 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:21:09.950 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:21:09.950 nr_hugepages=1024 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:09.951 resv_hugepages=0 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:09.951 surplus_hugepages=0 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:09.951 anon_hugepages=0 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71721020 kB' 'MemAvailable: 75711004 kB' 'Buffers: 9244 kB' 'Cached: 15958568 kB' 'SwapCached: 0 kB' 'Active: 12887972 
kB' 'Inactive: 3739908 kB' 'Active(anon): 12337640 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663568 kB' 'Mapped: 192872 kB' 'Shmem: 11677572 kB' 'KReclaimable: 535956 kB' 'Slab: 879740 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 343784 kB' 'KernelStack: 16768 kB' 'PageTables: 10640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13741288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201672 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.951 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
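The key-by-key scan in the trace above is setup/common.sh's get_meminfo helper splitting each meminfo line on IFS=': ' and echoing the value once the requested key matches. A minimal self-contained sketch of the same idea follows (an illustrative reimplementation, not SPDK's exact helper; the missing-key fallback of echoing 0 is an assumption):

get_meminfo() {
    local get=$1 node=${2:-}              # key to look up, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # per-node files prefix every line with "Node <id> "; strip that before parsing
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0                                # assumed fallback: treat a missing key as 0
}

With such a helper, get_meminfo HugePages_Rsvd reads the system-wide counter from /proc/meminfo and get_meminfo HugePages_Surp 0 reads node 0's counter from sysfs, which is the pair of lookups this trace performs.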
00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:21:09.952 
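With the reserved count in hand, hugepages.sh checks that the kernel's hugepage totals are internally consistent: the HugePages_Total value just echoed (1024) must equal the requested page count plus surplus plus reserved pages. A standalone sketch of that identity, reusing the get_meminfo sketch above (variable names follow the trace; reading surp via HugePages_Surp here is an assumption):

nr_hugepages=1024                        # the count the test requested
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
surp=$(get_meminfo HugePages_Surp)       # 0 in this run
total=$(get_meminfo HugePages_Total)     # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch: total=$total" >&2
fi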
11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:21:09.952 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116852 kB' 'MemFree: 36127944 kB' 'MemUsed: 11988908 kB' 'SwapCached: 0 kB' 'Active: 8631112 kB' 'Inactive: 191208 kB' 'Active(anon): 8245904 kB' 'Inactive(anon): 0 kB' 'Active(file): 385208 kB' 'Inactive(file): 191208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8367504 kB' 'Mapped: 134692 kB' 'AnonPages: 458048 kB' 'Shmem: 7791088 kB' 'KernelStack: 10440 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324460 kB' 'Slab: 525400 kB' 'SReclaimable: 324460 kB' 'SUnreclaim: 200940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:21:09.953 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.953 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:21:09.954 node0=1024 expecting 1024 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:21:09.954 00:21:09.954 real 0m8.463s 00:21:09.954 user 0m1.352s 00:21:09.954 sys 0m2.318s 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:09.954 11:31:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:21:09.954 ************************************ 00:21:09.954 END TEST default_setup 00:21:09.954 ************************************ 00:21:09.954 11:31:53 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:21:09.954 11:31:53 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:09.954 11:31:53 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:09.954 11:31:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:09.954 ************************************ 00:21:09.954 START TEST per_node_1G_alloc 00:21:09.954 ************************************ 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
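The "node0=1024 expecting 1024" line above is the per-node verification: the test walks /sys/devices/system/node/node*/meminfo, folds each node's hugepage counters into nodes_test[], and compares node 0 against the expected 1024. A simplified sketch of that per-node read, reusing the get_meminfo sketch and condensing the bookkeeping (the real script also tracks reserved and surplus pages separately):

nodes_test=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}                       # node0 -> 0, node1 -> 1
    total=$(get_meminfo HugePages_Total "$node")
    surp=$(get_meminfo HugePages_Surp "$node")    # 0 on node0 in this run
    nodes_test[node]=$(( total - surp ))
    echo "node${node}=${nodes_test[node]}"
done

On this host the trace reports two NUMA nodes (no_nodes=2), with all 1024 default-sized pages sitting on node 0, which is why only node0 is echoed with the expected count.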
00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:21:09.954 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:21:09.955 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:21:09.955 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:21:09.955 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:21:09.955 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:09.955 11:31:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:21:13.256 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:13.256 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:13.256 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71763632 kB' 'MemAvailable: 75753616 kB' 'Buffers: 9244 kB' 'Cached: 15958704 kB' 'SwapCached: 0 kB' 'Active: 12885556 kB' 'Inactive: 3739908 kB' 'Active(anon): 12335224 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660284 kB' 'Mapped: 192040 kB' 'Shmem: 11677708 kB' 'KReclaimable: 535956 kB' 'Slab: 879596 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 343640 kB' 'KernelStack: 16416 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13730956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201784 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 
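The per_node_1G_alloc test that starts here asks get_test_nr_hugepages for 1048576 kB (1 GiB) on each of nodes 0 and 1; at the 2048 kB page size shown in the meminfo dump that works out to 512 pages per node, and the script re-runs scripts/setup.sh with NRHUGE=512 and HUGENODE=0,1, exactly as traced above. A hedged usage sketch of the same invocation (the checkout path is a placeholder):

# 1 GiB per node at the default 2048 kB page size = 512 pages per node
size_kb=1048576
page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this host
echo "pages per node: $(( size_kb / page_kb ))"

# re-run the SPDK setup script with the per-node allocation, as in the trace
cd /path/to/spdk                                              # assumed checkout path
NRHUGE=512 HUGENODE=0,1 ./scripts/setup.sh

The "Already using the vfio-pci driver" lines that follow the invocation indicate the NVMe and DMA-engine devices were bound to vfio-pci by an earlier run, so setup.sh only has to adjust the hugepage allocation.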
00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 
11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.256 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
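The long [[ key == pattern ]] / continue run traced above is setup/common.sh's meminfo lookup: it reads the memory snapshot line by line with IFS=': ' and read -r var val _, skips every key that does not match the requested field, and echoes the value of the one that does (here AnonHugePages, which yields anon=0 before the next lookup, HugePages_Surp, begins). The following is a condensed re-creation of that pattern for reference, not the verbatim SPDK helper; the function name, argument handling, and usage lines are illustrative only.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used when stripping node prefixes

get_meminfo() {
    # Field to look up (e.g. AnonHugePages, HugePages_Surp) and optional NUMA node.
    local get=$1 node=${2:-}
    local var val rest line
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node lookups read that node's meminfo file instead of the global one
    # (with an empty node the traced check falls through to /proc/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it so the
    # "Key: value" layout matches /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    # The field-by-field scan seen in the trace: skip non-matching keys,
    # print the value of the matching one and return.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Usage mirroring the traced calls:
anon=$(get_meminfo AnonHugePages)    # -> 0 in this run
surp=$(get_meminfo HugePages_Surp)   # -> 0
echo "anon=$anon surp=$surp"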
00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.257 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71764404 kB' 'MemAvailable: 75754388 kB' 'Buffers: 9244 kB' 'Cached: 15958704 kB' 'SwapCached: 0 kB' 'Active: 12884428 kB' 'Inactive: 3739908 kB' 'Active(anon): 12334096 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659620 kB' 'Mapped: 191952 kB' 'Shmem: 11677708 kB' 'KReclaimable: 535956 kB' 'Slab: 879604 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 343648 kB' 'KernelStack: 16384 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13730972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201784 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.258 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:13.259 11:31:56 
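The meminfo snapshot printed before each of these scans reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB. A quick arithmetic check (values copied from the snapshot above; not part of the traced script) confirms the hugetlb accounting is internally consistent:

# 1024 huge pages of 2048 kB each should account for the reported Hugetlb total.
total=1024          # HugePages_Total
page_kb=2048        # Hugepagesize (kB)
hugetlb_kb=2097152  # Hugetlb (kB)
(( total * page_kb == hugetlb_kb )) && echo "hugetlb accounting matches"   # 1024 * 2048 = 2097152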
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.259 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71764404 kB' 'MemAvailable: 75754388 kB' 'Buffers: 9244 kB' 'Cached: 15958704 kB' 'SwapCached: 0 kB' 'Active: 12884468 kB' 'Inactive: 3739908 kB' 'Active(anon): 12334136 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659656 kB' 'Mapped: 191952 kB' 'Shmem: 11677708 kB' 'KReclaimable: 535956 kB' 'Slab: 879604 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 343648 kB' 'KernelStack: 16400 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13730996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201784 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.260 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:13.261 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:21:13.261 nr_hugepages=1024 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:13.262 resv_hugepages=0 00:21:13.262 11:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:13.262 surplus_hugepages=0 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:13.262 anon_hugepages=0 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71764404 kB' 'MemAvailable: 75754388 kB' 'Buffers: 9244 kB' 'Cached: 15958764 kB' 'SwapCached: 0 kB' 'Active: 12884492 kB' 'Inactive: 3739908 kB' 'Active(anon): 12334160 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659580 kB' 'Mapped: 191952 kB' 'Shmem: 11677768 kB' 'KReclaimable: 535956 kB' 'Slab: 879604 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 343648 kB' 'KernelStack: 16384 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13731016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201784 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.262 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:21:13.263 11:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:13.263 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116852 kB' 'MemFree: 37183244 kB' 'MemUsed: 10933608 kB' 'SwapCached: 0 kB' 'Active: 8629276 kB' 'Inactive: 191208 kB' 'Active(anon): 8244068 kB' 'Inactive(anon): 0 kB' 'Active(file): 385208 kB' 'Inactive(file): 191208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8367604 kB' 'Mapped: 134160 kB' 'AnonPages: 456012 kB' 'Shmem: 7791188 kB' 'KernelStack: 9880 kB' 'PageTables: 5864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324460 kB' 'Slab: 525320 kB' 'SReclaimable: 324460 kB' 'SUnreclaim: 200860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.264 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
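The long runs of "IFS=': '", "read -r var val _", and "continue" entries in this trace are bash xtrace output from the key-matching loop in setup/common.sh's get_meminfo helper: each meminfo line is scanned until the requested key (HugePages_Surp for node 0 at this point) matches, and its value is echoed back. A minimal sketch of that pattern, under a hypothetical name and simplified from the traced script, looks like this:

```bash
#!/usr/bin/env bash
# get_meminfo_sketch: simplified, hypothetical stand-in for the get_meminfo helper
# traced above. It reads a meminfo-style file, strips the per-node "Node N " prefix,
# and prints the value of one requested key (or 0 if the key is absent).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node lookups read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop that prefix,
    # mirroring the "${mem[@]#Node +([0-9]) }" expansion seen in the trace.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Keep scanning (the repeated "continue" entries above) until the key matches.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    echo 0
}

# Example: get_meminfo_sketch HugePages_Surp 0   # prints 0 on the node traced here
```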
00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176536 kB' 'MemFree: 34581808 kB' 'MemUsed: 9594728 kB' 'SwapCached: 0 kB' 'Active: 4254896 kB' 'Inactive: 3548700 kB' 'Active(anon): 4089772 kB' 'Inactive(anon): 0 kB' 'Active(file): 165124 kB' 'Inactive(file): 3548700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7600424 kB' 'Mapped: 57792 kB' 'AnonPages: 203212 kB' 'Shmem: 3886600 kB' 'KernelStack: 6488 kB' 'PageTables: 2508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211496 kB' 'Slab: 354284 kB' 'SReclaimable: 211496 kB' 'SUnreclaim: 142788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
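The raw node-1 counters just dumped by printf come from /sys/devices/system/node/node1/meminfo. For a manual spot-check of the same hugepage figures outside the test, the standard sysfs locations shown in this trace can be read directly; the per-size nr_hugepages counters below are the usual kernel interface and are not quoted from the trace:

```bash
# Manual spot-check of the per-node hugepage counters the trace is parsing.
grep -H HugePages_ /sys/devices/system/node/node{0,1}/meminfo

# Dedicated per-size counters (2048 kB pages on this system):
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
```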
00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.265 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
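Around hugepages.sh@115 to @128 the test folds each node's reserved and surplus counts into its expected total and then reports it against what the kernel says, which produces the "node0=512 expecting 512" and "node1=512 expecting 512" lines further down. A simplified sketch of that bookkeeping, using the variable names from the trace and the values observed in this run, is:

```bash
#!/usr/bin/env bash
# Simplified sketch of the per-node bookkeeping traced in hugepages.sh. Values
# mirror this run: 512 x 2 MiB pages on each of the two NUMA nodes, with no
# reserved or surplus pages.
nodes_test=([0]=512 [1]=512)   # pages the test expects on each node
nodes_sys=([0]=512 [1]=512)    # pages the kernel reports per node
resv=0                         # HugePages_Rsvd read earlier in the trace

for node in "${!nodes_test[@]}"; do
    surp=0                             # HugePages_Surp for this node (0 above)
    (( nodes_test[node] += resv ))     # reserved pages still count toward the total
    (( nodes_test[node] += surp ))     # so do surplus pages
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# Output for this run:
#   node0=512 expecting 512
#   node1=512 expecting 512
```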
00:21:13.266 11:31:57 [per-field scan of the meminfo snapshot at setup/common.sh@31-32 (IFS=': '; read -r var val _; continue) over the remaining fields, Shmem through HugePages_Free, none matching HugePages_Surp]
00:21:13.266 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:21:13.267 node0=512 expecting 512
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:21:13.267 node1=512 expecting 512
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:21:13.267
00:21:13.267 real 0m3.452s
00:21:13.267 user 0m1.244s
00:21:13.267 sys 0m2.245s
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:21:13.267 11:31:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:21:13.267 ************************************
00:21:13.267 END TEST per_node_1G_alloc
00:21:13.267 ************************************
00:21:13.267 11:31:57 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:21:13.267 11:31:57 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
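The walk that just finished is the generic get_meminfo pattern setup/common.sh uses for every counter it needs: snapshot the meminfo file, then for each line set IFS=': ', read -r var val _, and continue until the requested field name matches, at which point the value is echoed (0 surplus pages here). A minimal standalone sketch of that lookup follows, assuming a hypothetical helper name and the standard /sys/devices/system/node/node<N>/meminfo layout for the per-node case; it illustrates the pattern and is not the actual setup/common.sh code:

    shopt -s extglob                          # the "Node <N> " prefix strip below needs extended globbing
    # get_meminfo_value <FieldName> [node]    # hypothetical helper, names are illustrative
    get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node counters live in sysfs when a node number is given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }       # per-node files prefix every field with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # same skip-until-match walk as in the trace above
            echo "${val:-0}"
            return 0
        done < "$mem_f"
        return 1
    }
    get_meminfo_value HugePages_Free          # e.g. free 2 MiB pages system-wide
    get_meminfo_value HugePages_Free 0        # e.g. free 2 MiB pages on NUMA node 0

The node0=512 / node1=512 lines above appear to come from this kind of per-node read: 512 two-megabyte pages were reported on each node, which matches the 1 GiB per node that per_node_1G_alloc is named for.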
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:13.267 11:31:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:13.267 ************************************ 00:21:13.267 START TEST even_2G_alloc 00:21:13.267 ************************************ 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:13.267 11:31:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:21:17.465 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:17.465 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:21:17.465 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:17.465 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.465 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71810804 kB' 'MemAvailable: 75800788 kB' 'Buffers: 9244 kB' 'Cached: 15958860 kB' 'SwapCached: 0 kB' 'Active: 12885960 kB' 'Inactive: 3739908 kB' 'Active(anon): 12335628 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660552 kB' 'Mapped: 192056 kB' 'Shmem: 11677864 kB' 'KReclaimable: 535956 kB' 'Slab: 880152 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344196 kB' 'KernelStack: 16384 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13731632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201688 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB'
00:21:17.465 11:32:00 [per-field scan of /proc/meminfo at setup/common.sh@31-32 (IFS=': '; read -r var val _; continue) over every field from MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
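The even_2G_alloc parameters traced at the start of this test reduce to simple arithmetic: the requested size of 2097152 (2 GiB expressed in kB) at the default 2048 kB hugepage size gives nr_hugepages=1024, and with _no_nodes=2 and HUGE_EVEN_ALLOC=yes the pool is split evenly, 512 pages per NUMA node, before scripts/setup.sh is run with NRHUGE=1024. A small sketch of that derivation; the variable names are illustrative, not the hugepages.sh internals:

    size_kb=2097152                                   # 2 GiB requested by even_2G_alloc
    hugepagesize_kb=2048                              # "Hugepagesize: 2048 kB" from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))     # -> 1024 (exported as NRHUGE)
    no_nodes=2
    per_node=$(( nr_hugepages / no_nodes ))           # -> 512 for node0 and 512 for node1
    echo "NRHUGE=$nr_hugepages node0=$per_node node1=$per_node"

The meminfo snapshot just read back confirms the result (HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB). The "always [madvise] never" check at setup/hugepages.sh@96 appears to be the contents of /sys/kernel/mm/transparent_hugepage/enabled; since [never] is not selected, the test also samples AnonHugePages, which came back as anon=0 here.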
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71810856 kB' 'MemAvailable: 75800840 kB' 'Buffers: 9244 kB' 'Cached: 15958864 kB' 'SwapCached: 0 kB' 'Active: 12885700 kB' 'Inactive: 3739908 kB' 'Active(anon): 12335368 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660788 kB' 'Mapped: 191964 kB' 'Shmem: 11677868 kB' 'KReclaimable: 535956 kB' 'Slab: 880136 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344180 kB' 'KernelStack: 16384 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13731648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201672 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.467 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.467 
11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:21:17.467 11:32:00 [per-field scan of /proc/meminfo at setup/common.sh@31-32 (IFS=': '; read -r var val _; continue) over every field from Buffers through HugePages_Rsvd, none matching HugePages_Surp]
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
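verify_nr_hugepages reads the hugetlb counters one after another: AnonHugePages gave anon=0, HugePages_Surp just gave surp=0, and HugePages_Rsvd comes next. In the kernel's hugetlb accounting, HugePages_Total is the size of the pool, HugePages_Free the pages not yet handed out, HugePages_Rsvd the pages reserved by mappings but not yet faulted in, and HugePages_Surp the surplus pages allocated above nr_hugepages. A one-shot way to snapshot all four; a sketch, not the test's own helper:

    # grab the four hugetlb counters from /proc/meminfo in a single pass
    read -r hp_total hp_free hp_rsvd hp_surp < <(
        awk '/^HugePages_Total:/ {t=$2}
             /^HugePages_Free:/  {f=$2}
             /^HugePages_Rsvd:/  {r=$2}
             /^HugePages_Surp:/  {s=$2}
             END {print t, f, r, s}' /proc/meminfo)
    echo "pool=$hp_total free=$hp_free reserved=$hp_rsvd surplus=$hp_surp"

In this run the pool is 1024 pages, all free, with nothing reserved and no surplus, so the even 512/512 split can be checked directly against the per-node counters.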
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:21:17.469 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71811576 kB' 'MemAvailable: 75801560 kB' 'Buffers: 9244 kB' 'Cached: 15958884 kB' 'SwapCached: 0 kB' 'Active: 12885704 kB' 'Inactive: 3739908 kB' 'Active(anon): 12335372 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660788 kB' 'Mapped: 191964 kB' 'Shmem: 11677888 kB' 'KReclaimable: 535956 kB' 'Slab: 880136 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344180 kB' 'KernelStack: 16384 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13731668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201672 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB'
00:21:17.469 11:32:00 [per-field scan of /proc/meminfo at setup/common.sh@31-32 (IFS=': '; read -r var val _; continue) in progress, working through the fields from MemTotal onward looking for HugePages_Rsvd] 00:21:17.470 11:32:00
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.470 11:32:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:21:17.471 nr_hugepages=1024 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:17.471 resv_hugepages=0 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:17.471 surplus_hugepages=0 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:17.471 anon_hugepages=0 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.471 
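[editor's note] The lookups traced above and below all go through the get_meminfo helper in setup/common.sh. A minimal bash sketch of what the traced steps appear to do, reconstructed from this log rather than copied from the SPDK source (the function body, the loop shape, and the here-string are assumptions):

#!/usr/bin/env bash
# Sketch only: get_meminfo <field> [<node>] prints the value of <field>, read either
# from /proc/meminfo or, when a NUMA node index is given, from that node's meminfo file.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ line
	local mem_f=/proc/meminfo mem
	# Mirrors common.sh@23-24: prefer the per-node file when it exists.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"                  # common.sh@28
	mem=("${mem[@]#Node +([0-9]) }")           # common.sh@29: drop any "Node <N> " prefix
	for line in "${mem[@]}"; do                # common.sh@31-33: scan until the field matches
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

# Example calls matching the values echoed in this run:
#   get_meminfo HugePages_Rsvd     -> 0
#   get_meminfo HugePages_Total    -> 1024
#   get_meminfo HugePages_Surp 0   -> 0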
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:21:17.471 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71811576 kB' 'MemAvailable: 75801560 kB' 'Buffers: 9244 kB' 'Cached: 15958904 kB' 'SwapCached: 0 kB' 'Active: 12885736 kB' 'Inactive: 3739908 kB' 'Active(anon): 12335404 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660752 kB' 'Mapped: 191964 kB' 'Shmem: 11677908 kB' 'KReclaimable: 535956 kB' 'Slab: 880136 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344180 kB' 'KernelStack: 16368 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13731692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201688 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB'
[xtrace condensed for readability: the setup/common.sh@31-32 read/continue loop reads the fields above in order until it reaches HugePages_Total]
00:21:17.472 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:21:17.473 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116852 kB' 'MemFree: 37193752 kB' 'MemUsed: 10923100 kB' 'SwapCached: 0 kB' 'Active: 8629464 kB' 'Inactive: 191208 kB' 'Active(anon): 8244256 kB' 'Inactive(anon): 0 kB' 'Active(file): 385208 kB' 'Inactive(file): 191208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8367692 kB' 'Mapped: 134160 kB' 'AnonPages: 456188 kB' 'Shmem: 7791276 kB' 'KernelStack: 9864 kB' 'PageTables: 5768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324460 kB' 'Slab: 525728 kB' 'SReclaimable: 324460 kB' 'SUnreclaim: 201268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
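[editor's note] The node-scoped lookup above reads /sys/devices/system/node/node0/meminfo, whose lines, unlike /proc/meminfo, carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") step at common.sh@29 strips that prefix so the same field scan works for both files. A tiny standalone illustration of that expansion (the sample array contents are taken from the node 0 snapshot above; the script around them is illustrative only):

#!/usr/bin/env bash
# Illustration of the "Node <N> " prefix strip seen at setup/common.sh@29.
shopt -s extglob   # required for the +([0-9]) extended glob used in the expansion

mem=('Node 0 MemTotal: 48116852 kB' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")   # remove the shortest leading "Node <digits> " match
printf '%s\n' "${mem[@]}"
# Output:
#   MemTotal: 48116852 kB
#   HugePages_Surp: 0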
[xtrace condensed for readability: the setup/common.sh@31-32 read/continue loop reads the node 0 fields above in order until it reaches HugePages_Surp]
00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
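[editor's note] Node 0 reports HugePages_Surp: 0, so its nodes_test entry only grows by resv (also 0); the same hugepages.sh@115-117 loop then repeats for node 1 below. A rough sketch of that bookkeeping, reusing the get_meminfo sketch given earlier; the initial 512-per-node expectation is an assumption inferred from the nodes_sys assignments in the trace, not code copied from setup/hugepages.sh:

# Sketch of the per-node accounting traced at setup/hugepages.sh@115-117.
# Assumes nodes_test starts from the expected even split (512 x 2 MiB pages per node).
nodes_test=([0]=512 [1]=512)
resv=0   # reserved hugepages, from the get_meminfo HugePages_Rsvd call earlier

for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))                                   # hugepages.sh@116
	(( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # hugepages.sh@117, 0 on both nodes here
done

echo "${nodes_test[@]}"   # 512 512 -- matching HugePages_Total in each node's snapshot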
'WritebackTmp: 0 kB' 'KReclaimable: 211496 kB' 'Slab: 354408 kB' 'SReclaimable: 211496 kB' 'SUnreclaim: 142912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.474 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:17.475 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:21:17.476 node0=512 expecting 512 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:21:17.476 node1=512 expecting 512 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:21:17.476 00:21:17.476 real 0m3.784s 00:21:17.476 user 0m1.412s 00:21:17.476 sys 0m2.463s 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:17.476 11:32:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:21:17.476 ************************************ 00:21:17.476 END TEST even_2G_alloc 00:21:17.476 ************************************ 00:21:17.476 11:32:01 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:21:17.476 11:32:01 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:17.476 11:32:01 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:17.476 11:32:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:17.476 ************************************ 00:21:17.476 START TEST odd_alloc 00:21:17.476 ************************************ 00:21:17.476 11:32:01 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:17.476 11:32:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:21:20.766 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:20.766 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
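In the odd_alloc setup traced just above (hugepages.sh@159 through @84), the test asks for 2098176 kB, settles on nr_hugepages=1025, and fills nodes_test from the last node backwards, leaving 512 pages on node 1 and 513 on node 0. The sketch below reproduces the same split in the forward direction; it illustrates the resulting distribution only and is not the actual hugepages.sh loop:

    # Split an odd hugepage count across NUMA nodes so that earlier nodes
    # absorb the remainder, matching the node0=513 / node1=512 result above.
    split_hugepages() {
        local total=$1 nodes=$2 i per rem
        per=$(( total / nodes ))
        rem=$(( total % nodes ))
        for (( i = 0; i < nodes; i++ )); do
            echo "node$i=$(( per + (i < rem ? 1 : 0) ))"
        done
    }
    split_hugepages 1025 2    # -> node0=513, node1=512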
00:21:20.766 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:20.766 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71827812 kB' 'MemAvailable: 75817796 kB' 'Buffers: 9244 kB' 'Cached: 15959012 kB' 'SwapCached: 0 kB' 'Active: 12886776 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336444 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661576 kB' 'Mapped: 191988 kB' 'Shmem: 11678016 kB' 'KReclaimable: 535956 kB' 'Slab: 880076 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344120 kB' 'KernelStack: 16528 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485696 kB' 'Committed_AS: 13733372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201960 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.766 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 
11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 
11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
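At this point the trace has resolved anon=0 (hugepages.sh@97): the AnonHugePages line of /proc/meminfo reads 0 kB, and this counter is only consulted because the earlier check against "always [madvise] never" shows transparent hugepages are not set to never. The calls that follow repeat the same scan for HugePages_Surp (@99) and HugePages_Rsvd (@100), and later per node. A sketch of what that verification pass looks like from the outside, reusing the illustrative get_meminfo_field helper sketched earlier; the real verify_nr_hugepages also folds reserved and surplus pages into its per-node bookkeeping before the comparison:

    # Read the per-node hugepage counters and print them next to the expected
    # odd_alloc split. Helper name and layout are illustrative, not SPDK's.
    declare -A expected=( [0]=513 [1]=512 )
    for node in 0 1; do
        total=$(get_meminfo_field HugePages_Total "$node")
        free=$(get_meminfo_field HugePages_Free  "$node")
        surp=$(get_meminfo_field HugePages_Surp  "$node")
        echo "node$node: total=$total free=$free surp=$surp (expecting ${expected[$node]})"
    done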
00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71832428 kB' 'MemAvailable: 75822412 kB' 'Buffers: 9244 kB' 'Cached: 15959012 kB' 'SwapCached: 0 kB' 'Active: 12886516 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336184 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661368 kB' 'Mapped: 191984 kB' 'Shmem: 11678016 kB' 'KReclaimable: 535956 kB' 'Slab: 880176 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344220 kB' 'KernelStack: 16512 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485696 kB' 'Committed_AS: 13734748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201912 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
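The /proc/meminfo dump at the top of this scan is internally consistent with the odd_alloc request: 1025 hugepages of 2048 kB account for exactly the 2099200 kB on the Hugetlb line, and the 2098176 kB passed to get_test_nr_hugepages is 2049 x 1024, i.e. the HUGEMEM=2049 setting expressed in kB. A quick arithmetic check in plain bash:

    pages=1025 page_kb=2048
    echo $(( pages * page_kb ))    # 2099200 -> matches 'Hugetlb: 2099200 kB'
    echo $(( 2049 * 1024 ))        # 2098176 -> the size requested via HUGEMEM=2049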
00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.767 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- 
[trace condensed: setup/common.sh get_meminfo keeps scanning /proc/meminfo for HugePages_Surp; the fields Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free do not match, so each one produces the same continue / IFS=': ' / read -r var val _ trace at 00:21:20.768 11:32:04]
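The repetition condensed above is the get_meminfo helper from setup/common.sh walking a meminfo file one field at a time until it finds the requested key. A minimal bash sketch of that loop, written here under a hypothetical name and simplified relative to the real helper (which, as the trace shows, buffers the file with mapfile and also handles per-node meminfo files), looks like this:

```bash
# Hypothetical, simplified sketch of the loop traced above: scan a
# meminfo-style file and print the value of one key (e.g. HugePages_Surp).
get_meminfo_sketch() {
    local get=$1 mem_f=/proc/meminfo var val _
    while IFS=': ' read -r var val _; do
        # Every field that is not the requested key shows up in the
        # trace as a bare "continue" followed by the next read.
        [[ $var == "$get" ]] || continue
        echo "$val"   # the "kB" unit, when present, lands in "_"
        return 0
    done < "$mem_f"
}
```

With the meminfo dump shown in this run, get_meminfo_sketch HugePages_Surp would print 0, which is exactly the echo 0 / surp=0 pair that follows next in the trace.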
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.768 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.769 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71830656 kB' 'MemAvailable: 75820640 kB' 'Buffers: 9244 kB' 'Cached: 15959032 kB' 'SwapCached: 0 kB' 'Active: 12886996 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336664 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661868 kB' 'Mapped: 191984 kB' 'Shmem: 11678036 kB' 'KReclaimable: 535956 kB' 'Slab: 880176 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344220 kB' 'KernelStack: 16528 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485696 kB' 'Committed_AS: 13734768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201928 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:20.769 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:20.769 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.769 11:32:04 setup.sh.hugepages.odd_alloc -- 
[trace condensed: the same get_meminfo loop now looks for HugePages_Rsvd; MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and CmaTotal do not match, each producing the same continue / IFS / read trace from 00:21:20.769 through 00:21:20.770]
11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:21:20.770 nr_hugepages=1025 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:20.770 resv_hugepages=0 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:20.770 surplus_hugepages=0 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:20.770 anon_hugepages=0 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:20.770 11:32:04 
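At this point setup/hugepages.sh has surp=0, resv=0 and nr_hugepages=1025, and the (( 1025 == nr_hugepages + surp + resv )) checks assert that the kernel's hugepage counters fully account for the odd-sized request. A rough, self-contained illustration of that accounting check (hypothetical helper names, not the hugepages.sh source):

```bash
# Read a single value out of /proc/meminfo (hypothetical helper).
meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

# The odd_alloc test asks for 1025 hugepages and then checks that the
# kernel reports exactly that many, with no surplus or reserved pages.
verify_odd_alloc() {
    local requested=$1
    local total surp resv
    total=$(meminfo_val HugePages_Total)
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)
    (( total == requested + surp + resv )) && (( total == requested ))
}

verify_odd_alloc 1025 && echo "hugepage accounting is consistent"
```

In this run all three reads come back as the values already printed in the dumps above (1025, 0, 0), so both comparisons succeed and the trace moves on to fetching HugePages_Total and then to the per-node checks.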
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71831012 kB' 'MemAvailable: 75820996 kB' 'Buffers: 9244 kB' 'Cached: 15959052 kB' 'SwapCached: 0 kB' 'Active: 12887028 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336696 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661848 kB' 'Mapped: 191984 kB' 'Shmem: 11678056 kB' 'KReclaimable: 535956 kB' 'Slab: 880176 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344220 kB' 'KernelStack: 16576 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485696 kB' 'Committed_AS: 13734788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201912 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.770 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.770 
[trace condensed: get_meminfo now looks for HugePages_Total; Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted do not match, repeating the continue / IFS / read pattern from 00:21:20.770 through 00:21:20.771, until the loop reaches the matching field:] 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.771 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116852 kB' 'MemFree: 37201104 kB' 'MemUsed: 10915748 kB' 'SwapCached: 0 kB' 'Active: 8629684 kB' 'Inactive: 191208 kB' 'Active(anon): 8244476 kB' 'Inactive(anon): 0 kB' 'Active(file): 385208 kB' 'Inactive(file): 191208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8367836 kB' 'Mapped: 134160 kB' 'AnonPages: 456216 kB' 'Shmem: 7791420 kB' 'KernelStack: 9896 kB' 'PageTables: 5896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324460 kB' 'Slab: 525636 kB' 'SReclaimable: 324460 kB' 'SUnreclaim: 201176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
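The hugepages.sh@27..@33 lines above show the per-node half of the test: get_nodes discovers two NUMA nodes and records 512 hugepages on node0 and 513 on node1 (the two halves of the odd 1025-page request), and get_meminfo is then pointed at /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the helper strips before matching. A small, hypothetical sketch of that per-node readout, assuming the same sysfs layout:

```bash
# Hypothetical sketch of the per-node readout pattern traced above:
# prefer the node's own meminfo file and strip the "Node <n> " prefix
# that its lines carry before matching the requested key.
node_meminfo_val() {
    local node=$1 key=$2 file=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k { print $2 }'
}

# Example from this run: node0 holds 512 of the 1025 odd-allocated pages,
# node1 the remaining 513.
node_meminfo_val 0 HugePages_Total
```

The HugePages_Total: 512 entry in the node0 dump just above is the value this kind of lookup returns for node 0.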
[trace condensed: get_meminfo scans the node0 meminfo fields MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free, none matching HugePages_Surp, from 00:21:20.771 through 00:21:20.772] 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176536 kB' 'MemFree: 34631320 kB' 'MemUsed: 9545216 kB' 'SwapCached: 0 kB' 'Active: 4256732 kB' 'Inactive: 3548700 kB' 'Active(anon): 4091608 kB' 'Inactive(anon): 0 kB' 'Active(file): 165124 kB' 'Inactive(file): 3548700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7600484 kB' 'Mapped: 57812 kB' 'AnonPages: 205028 kB' 'Shmem: 3886660 kB' 'KernelStack: 6520 kB' 'PageTables: 2600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211496 kB' 'Slab: 354540 kB' 'SReclaimable: 211496 kB' 'SUnreclaim: 143044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.772 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.773 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:21:20.774 node0=512 expecting 513 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:21:20.774 node1=513 expecting 512 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:21:20.774 00:21:20.774 real 0m3.555s 00:21:20.774 user 0m1.396s 00:21:20.774 sys 0m2.247s 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:20.774 11:32:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:21:20.774 ************************************ 00:21:20.774 END TEST odd_alloc 00:21:20.774 ************************************ 00:21:20.774 11:32:04 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:21:20.774 11:32:04 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:20.774 11:32:04 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:20.774 11:32:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:20.774 ************************************ 00:21:20.774 START TEST custom_alloc 00:21:20.774 ************************************ 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:21:20.774 
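The xtrace above is setup/common.sh's get_meminfo being stepped through under set -x: it loads /proc/meminfo (or the per-NUMA-node copy under /sys/devices/system/node), strips the "Node <n> " prefix, and compares every field name against the one requested, which is why each single lookup produces dozens of near-identical "[[ ... ]] / continue" lines in this log. A minimal standalone sketch of that lookup, reconstructed from the traced lines (the helper name and argument handling below are illustrative, not the literal SPDK script), is:

  #!/usr/bin/env bash
  # Sketch: return the value of one meminfo field, optionally for a single NUMA node.
  # Reconstructed for illustration from the trace above; details may differ from the
  # real setup/common.sh used by this test.
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node meminfo lives in sysfs and prefixes every line with "Node <n> ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix, if present
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done
      echo 0   # field not present in this meminfo file
  }
  # Example: surplus huge pages on NUMA node 1, as queried by the odd_alloc test above.
  get_meminfo_sketch HugePages_Surp 1
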
11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:20.774 11:32:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:21:25.104 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:25.104 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:80:04.2 (8086 2021): 
Already using the vfio-pci driver 00:21:25.104 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:25.104 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:25.104 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 70822260 kB' 'MemAvailable: 74812244 kB' 'Buffers: 9244 kB' 'Cached: 15959164 kB' 'SwapCached: 0 kB' 'Active: 12888248 kB' 'Inactive: 3739908 kB' 'Active(anon): 12337916 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662480 kB' 'Mapped: 192076 kB' 'Shmem: 11678168 kB' 'KReclaimable: 535956 kB' 'Slab: 880140 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344184 kB' 'KernelStack: 16656 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962432 kB' 'Committed_AS: 13735400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201912 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.105 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 70822160 kB' 'MemAvailable: 74812144 kB' 'Buffers: 9244 kB' 'Cached: 15959168 kB' 'SwapCached: 0 kB' 'Active: 12887988 kB' 'Inactive: 3739908 kB' 'Active(anon): 12337656 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662280 kB' 'Mapped: 192072 kB' 'Shmem: 11678172 kB' 'KReclaimable: 535956 kB' 'Slab: 880300 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344344 kB' 'KernelStack: 16512 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962432 kB' 'Committed_AS: 13735416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201848 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.106 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.107 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
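
The block above (and the near-identical blocks before and after it) is the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time: every non-matching key hits the continue at common.sh@32, and the requested key falls through to the echo/return at common.sh@33. A minimal sketch of that pattern, condensed from the trace (assumption: this is a simplified illustration, not the verbatim SPDK setup/common.sh source, which also handles per-node meminfo files and strips their "Node <N> " prefixes):

    get_meminfo() {
        local get=$1
        local mem_f=/proc/meminfo
        local var val
        # IFS contains ':' and ' ', so "MemTotal:   92293388 kB" splits into
        # var=MemTotal, val=92293388 (trailing "kB" lands in _) -- same split as common.sh@31.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every key except the requested one
            echo "$val"                        # common.sh@33: print the value...
            return 0                           # ...and stop scanning
        done < "$mem_f"
        return 1
    }

In the surrounding trace the callers in setup/hugepages.sh use it exactly this way: anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp), resv=$(get_meminfo HugePages_Rsvd), each of which evaluates to 0 on this machine.
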
00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.108 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 70820952 kB' 'MemAvailable: 74810936 kB' 'Buffers: 9244 kB' 'Cached: 15959184 kB' 'SwapCached: 0 kB' 'Active: 12887248 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336916 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661924 kB' 'Mapped: 191996 kB' 'Shmem: 11678188 kB' 'KReclaimable: 535956 kB' 'Slab: 880228 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 
344272 kB' 'KernelStack: 16448 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962432 kB' 'Committed_AS: 13735436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201864 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.109 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 
11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.110 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 
11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.111 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
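
For completeness: in every lookup above the node argument is empty, so the check at common.sh@23 ([[ -e /sys/devices/system/node/node/meminfo ]]) fails and the helper falls back to /proc/meminfo. When a NUMA node is passed, the per-node meminfo is read instead, and its lines carry a "Node <N> " prefix that the expansion traced at common.sh@29 strips. A standalone sketch of just that step (assumption: node0 exists on the machine; extglob is required for the +([0-9]) pattern):

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # "Node 0 HugePages_Total:  512" -> "HugePages_Total:  512", as at common.sh@29
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
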
00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:21:25.112 nr_hugepages=1536 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:25.112 resv_hugepages=0 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:25.112 surplus_hugepages=0 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:25.112 anon_hugepages=0 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.112 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 70819796 kB' 'MemAvailable: 74809780 kB' 'Buffers: 9244 kB' 'Cached: 15959208 kB' 'SwapCached: 0 kB' 'Active: 12887640 kB' 'Inactive: 3739908 kB' 'Active(anon): 12337308 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662336 kB' 'Mapped: 191996 kB' 'Shmem: 11678212 kB' 'KReclaimable: 535956 kB' 'Slab: 880228 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344272 kB' 'KernelStack: 16576 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962432 kB' 'Committed_AS: 13735460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201864 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 3145728 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.113 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.114 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116852 kB' 'MemFree: 37234124 kB' 'MemUsed: 10882728 kB' 'SwapCached: 0 kB' 'Active: 8629968 kB' 'Inactive: 191208 kB' 'Active(anon): 8244760 kB' 'Inactive(anon): 0 kB' 'Active(file): 385208 kB' 'Inactive(file): 191208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8367952 kB' 'Mapped: 134160 kB' 'AnonPages: 456332 kB' 'Shmem: 7791536 kB' 'KernelStack: 9896 kB' 'PageTables: 5872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324460 kB' 'Slab: 525936 kB' 'SReclaimable: 324460 kB' 'SUnreclaim: 201476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.115 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.115 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.116 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176536 kB' 'MemFree: 33587348 kB' 'MemUsed: 10589188 kB' 'SwapCached: 0 kB' 'Active: 4257312 kB' 'Inactive: 3548700 kB' 'Active(anon): 4092188 kB' 'Inactive(anon): 0 kB' 'Active(file): 165124 kB' 'Inactive(file): 3548700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7600516 kB' 'Mapped: 57824 kB' 'AnonPages: 205608 kB' 'Shmem: 3886692 kB' 'KernelStack: 6520 kB' 'PageTables: 2672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211496 kB' 'Slab: 354356 kB' 'SReclaimable: 211496 kB' 'SUnreclaim: 142860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.117 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
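The field-by-field checks running above and below this point are the xtrace of the get_meminfo helper in setup/common.sh: it reads a node's meminfo file, strips the leading "Node N " prefix, and scans key by key until it reaches the requested field, here HugePages_Surp for node 1. A minimal sketch of that idiom, reconstructed from this trace rather than taken verbatim from the SPDK source (the names get, node, mem_f and mem follow the trace; everything else is an assumption), looks like:

    # sketch only: reconstructed from the xtrace, not the actual setup/common.sh
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # prefer the per-node sysfs file when a node id is given and it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # drop the leading "Node N " on sysfs lines
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # each "continue" in the trace is one skipped field
            echo "$val"
            return 0
        done
        return 1
    }
    # get_meminfo HugePages_Surp 1  ->  0, matching the "echo 0" further down in this trace

Each "continue" entry in the log is one skipped field of that scan; the earlier "echo 1536" was the system-wide HugePages_Total, which the custom_alloc test checks against the expected per-node split of 512 pages on node0 and 1024 on node1 before the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines below.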
00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.118 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:21:25.119 node0=512 
expecting 512 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:21:25.119 node1=1024 expecting 1024 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:21:25.119 00:21:25.119 real 0m3.871s 00:21:25.119 user 0m1.475s 00:21:25.119 sys 0m2.491s 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:25.119 11:32:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:21:25.119 ************************************ 00:21:25.119 END TEST custom_alloc 00:21:25.119 ************************************ 00:21:25.119 11:32:08 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:21:25.119 11:32:08 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:25.119 11:32:08 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:25.119 11:32:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:25.119 ************************************ 00:21:25.119 START TEST no_shrink_alloc 00:21:25.119 ************************************ 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:21:25.119 11:32:08 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:25.119 11:32:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:21:28.407 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:28.407 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:28.407 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71853080 kB' 'MemAvailable: 75843064 kB' 'Buffers: 9244 kB' 'Cached: 15959312 kB' 'SwapCached: 0 kB' 'Active: 12887116 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336784 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661160 kB' 'Mapped: 192096 kB' 'Shmem: 11678316 kB' 'KReclaimable: 535956 kB' 'Slab: 880060 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344104 kB' 'KernelStack: 16432 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13733744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201800 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.407 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.407 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71852832 kB' 'MemAvailable: 75842816 kB' 'Buffers: 9244 kB' 'Cached: 15959316 kB' 'SwapCached: 0 kB' 'Active: 12886340 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336008 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660860 kB' 'Mapped: 191992 kB' 'Shmem: 11678320 kB' 'KReclaimable: 535956 kB' 'Slab: 880068 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344112 kB' 'KernelStack: 16416 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13733764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201800 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
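The backslash-riddled tokens in the comparisons above (for example \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are not log corruption; they appear to be bash xtrace escaping each character of the quoted right-hand side of == inside [[ ]] so the key is shown as a literal string rather than a glob pattern. A tiny standalone reproduction (the variable names here are illustrative only, not taken from the SPDK scripts):

    #!/usr/bin/env bash
    set -x
    get=HugePages_Surp   # hypothetical key being looked up
    var=Zswap            # current /proc/meminfo key under inspection
    # with xtrace on, the quoted "$get" is expected to print as \H\u\g\e\P\a\g\e\s\_\S\u\r\p
    [[ $var == "$get" ]] || echo "no match, keep scanning"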
00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.408 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71852832 kB' 'MemAvailable: 75842816 kB' 'Buffers: 9244 kB' 'Cached: 15959316 kB' 'SwapCached: 0 kB' 'Active: 12886028 kB' 'Inactive: 3739908 kB' 'Active(anon): 12335696 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660560 kB' 'Mapped: 191992 kB' 'Shmem: 11678320 kB' 'KReclaimable: 535956 kB' 'Slab: 880068 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344112 kB' 'KernelStack: 16416 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13733784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201800 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.409 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 
11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
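The lookups traced above (AnonHugePages, HugePages_Surp, HugePages_Rsvd, and HugePages_Total below) all step through the same get_meminfo helper in setup/common.sh. The following is a condensed sketch reconstructed from the traced statements, not copied from the SPDK source, so the exact control flow and argument handling are approximate:

    # condensed sketch of the helper the trace above is stepping through
    # (reconstructed from the traced statements; details are assumptions)
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f mem
        mem_f=/proc/meminfo
        # when a NUMA node is requested and its meminfo exists, read that copy instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
            echo "$val"                        # value only, e.g. "0" or "1024"
            return 0
        done
        return 1
    }

On this host node is empty, so the per-node branch is skipped and the value comes straight from /proc/meminfo, which is why every call above scans the full key list before hitting its target.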
00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:21:28.410 nr_hugepages=1024 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:28.410 resv_hugepages=0 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:28.410 surplus_hugepages=0 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:28.410 anon_hugepages=0 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71852964 kB' 'MemAvailable: 75842948 kB' 'Buffers: 9244 kB' 'Cached: 15959356 kB' 'SwapCached: 0 kB' 'Active: 12886364 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336032 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660860 kB' 'Mapped: 191992 kB' 'Shmem: 11678360 kB' 'KReclaimable: 535956 kB' 'Slab: 880068 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344112 kB' 'KernelStack: 16416 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13733808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201800 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 
11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.410 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116852 kB' 'MemFree: 36171788 kB' 'MemUsed: 11945064 kB' 'SwapCached: 0 kB' 'Active: 8629140 kB' 'Inactive: 191208 kB' 'Active(anon): 8243932 kB' 'Inactive(anon): 0 kB' 'Active(file): 385208 kB' 'Inactive(file): 191208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8368072 kB' 'Mapped: 134160 kB' 'AnonPages: 455416 kB' 'Shmem: 7791656 kB' 'KernelStack: 9880 kB' 'PageTables: 5728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324460 kB' 'Slab: 525568 kB' 'SReclaimable: 324460 kB' 'SUnreclaim: 201108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
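Here get_nodes walks /sys/devices/system/node/node*/ and records each node's hugepage count (1024 on node0, 0 on node1, so no_nodes=2), and get_meminfo is then re-invoked with node=0 so it reads the node0 meminfo file instead of the global one. A sketch of that enumeration, assuming the standard per-node sysfs path for the 2048 kB hugepage size reported above:

# Per-node hugepage enumeration, mirroring the get_nodes step traced above.
nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    idx=${node##*node}
    # 2048kB matches "Hugepagesize: 2048 kB" in the meminfo dump above.
    nodes_sys[idx]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
echo "nodes: $no_nodes, node0=${nodes_sys[0]:-0}, node1=${nodes_sys[1]:-0}"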
00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.411 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:21:28.412 node0=1024 expecting 1024 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:28.412 11:32:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:21:32.606 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:32.606 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:32.606 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:32.606 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:21:32.606 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71864144 kB' 'MemAvailable: 75854128 kB' 'Buffers: 9244 kB' 'Cached: 15959440 kB' 'SwapCached: 0 kB' 'Active: 12886832 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336500 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660888 kB' 'Mapped: 192096 kB' 'Shmem: 11678444 kB' 'KReclaimable: 535956 kB' 'Slab: 880320 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344364 kB' 'KernelStack: 16432 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13734128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201832 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.606 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.607 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- 
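The lookup traced above is a plain key scan over meminfo; the sketch below is a simplified reconstruction from the xtrace (the traced original uses mapfile plus an extglob strip of per-node "Node N " prefixes; the function and variable names here just follow the trace), not the verbatim SPDK test/setup/common.sh:

    # Scan meminfo for one key; print its value, or 0 if the key is absent.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # A per-node file is used only when a node id is supplied and the sysfs
        # path exists; with node empty, the trace above falls back to /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the per-key "continue" lines in the trace
            echo "${val:-0}"
            return 0
        done <"$mem_f"
        echo 0
    }
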
# mem=("${mem[@]#Node +([0-9]) }") 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.608 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71864744 kB' 'MemAvailable: 75854728 kB' 'Buffers: 9244 kB' 'Cached: 15959440 kB' 'SwapCached: 0 kB' 'Active: 12886708 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336376 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661204 kB' 'Mapped: 191996 kB' 'Shmem: 11678444 kB' 'KReclaimable: 535956 kB' 'Slab: 880280 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344324 kB' 'KernelStack: 16416 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13734144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201816 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.609 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.610 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.611 11:32:15 
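The caller traced here (setup/hugepages.sh@97-@100) feeds those lookups into plain shell variables; a minimal sketch of that collection step, reconstructed from the "anon=0", "surp=0" and "get_meminfo HugePages_Rsvd" trace lines rather than copied from the script itself:

    anon=$(get_meminfo AnonHugePages)    # 0 in the run above (AnonHugePages: 0 kB)
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0, traced immediately after this point
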
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71864876 kB' 'MemAvailable: 75854860 kB' 'Buffers: 9244 kB' 'Cached: 15959460 kB' 'SwapCached: 0 kB' 'Active: 12886744 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336412 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661204 kB' 'Mapped: 191996 kB' 'Shmem: 11678464 kB' 'KReclaimable: 535956 kB' 'Slab: 880280 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344324 kB' 'KernelStack: 16416 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13734168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201816 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.611 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.612 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.613 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:21:32.614 nr_hugepages=1024 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:32.614 resv_hugepages=0 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:32.614 surplus_hugepages=0 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:32.614 anon_hugepages=0 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:32.614 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293388 kB' 'MemFree: 71863620 
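With all three counters and the target size echoed above, the check at hugepages.sh@107-@109 is simple hugepage accounting; a worked version using the values from this run (the closing note about Hugetlb is just arithmetic on the meminfo snapshot, not an extra check performed by the script):

    nr_hugepages=1024 surp=0 resv=0 anon=0
    (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0, so the check passes
    (( 1024 == nr_hugepages ))                 # also true; the trace then moves on to a
                                               # HugePages_Total lookup, as shown below
    # Snapshot consistency: HugePages_Total (1024) * Hugepagesize (2048 kB) = 2097152 kB,
    # matching the Hugetlb figure reported in the meminfo dumps above.
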
kB' 'MemAvailable: 75853604 kB' 'Buffers: 9244 kB' 'Cached: 15959480 kB' 'SwapCached: 0 kB' 'Active: 12886968 kB' 'Inactive: 3739908 kB' 'Active(anon): 12336636 kB' 'Inactive(anon): 0 kB' 'Active(file): 550332 kB' 'Inactive(file): 3739908 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661444 kB' 'Mapped: 191996 kB' 'Shmem: 11678484 kB' 'KReclaimable: 535956 kB' 'Slab: 880280 kB' 'SReclaimable: 535956 kB' 'SUnreclaim: 344324 kB' 'KernelStack: 16432 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486720 kB' 'Committed_AS: 13733820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201800 kB' 'VmallocChunk: 0 kB' 'Percpu: 67520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 791884 kB' 'DirectMap2M: 31389696 kB' 'DirectMap1G: 69206016 kB' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.615 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.616 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116852 kB' 'MemFree: 36180660 kB' 'MemUsed: 11936192 kB' 'SwapCached: 0 kB' 'Active: 8630388 kB' 
'Inactive: 191208 kB' 'Active(anon): 8245180 kB' 'Inactive(anon): 0 kB' 'Active(file): 385208 kB' 'Inactive(file): 191208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8368196 kB' 'Mapped: 134160 kB' 'AnonPages: 456684 kB' 'Shmem: 7791780 kB' 'KernelStack: 9864 kB' 'PageTables: 5724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324460 kB' 'Slab: 525688 kB' 'SReclaimable: 324460 kB' 'SUnreclaim: 201228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.617 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.618 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:21:32.619 node0=1024 expecting 1024 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:21:32.619 00:21:32.619 real 0m7.345s 00:21:32.619 user 0m2.760s 00:21:32.619 sys 0m4.760s 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:32.619 11:32:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:21:32.619 ************************************ 00:21:32.619 END TEST no_shrink_alloc 00:21:32.619 ************************************ 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:21:32.619 11:32:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:21:32.619 00:21:32.619 real 0m31.115s 00:21:32.619 user 0m9.883s 00:21:32.619 sys 0m16.979s 00:21:32.619 11:32:16 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:32.619 11:32:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:32.619 ************************************ 00:21:32.619 END TEST hugepages 00:21:32.619 ************************************ 00:21:32.619 11:32:16 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:21:32.619 11:32:16 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:32.619 11:32:16 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:32.619 11:32:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:21:32.619 ************************************ 00:21:32.619 START TEST driver 00:21:32.619 ************************************ 00:21:32.619 11:32:16 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:21:32.619 * Looking for test storage... 
00:21:32.619 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:21:32.619 11:32:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:21:32.619 11:32:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:21:32.619 11:32:16 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:21:37.887 11:32:21 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:21:37.887 11:32:21 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:37.887 11:32:21 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:37.887 11:32:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:21:37.887 ************************************ 00:21:37.887 START TEST guess_driver 00:21:37.887 ************************************ 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 214 > 0 )) 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:21:37.887 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:21:37.887 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:21:37.887 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:21:37.887 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:21:37.887 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:21:37.887 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:21:37.887 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:21:37.887 11:32:21 
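The guess_driver trace above settles on vfio-pci only after it sees populated IOMMU groups under /sys/kernel/iommu_groups, reads the vfio enable_unsafe_noiommu_mode parameter, and confirms that modprobe --show-depends can resolve the vfio_pci module chain. The following is a minimal standalone sketch of that same gate, not the harness's own helper; the nullglob guard and the treatment of a missing noiommu parameter file as "N" are assumptions of the sketch, while the "No valid driver found" sentinel is the string the traced script compares against.

#!/usr/bin/env bash
# Sketch only: reproduce the vfio-pci detection gate seen in the guess_driver trace.
set -u
shopt -s nullglob

pick_driver() {
    # Count populated IOMMU groups (the trace shows 214 on this host).
    local groups=(/sys/kernel/iommu_groups/*)

    # Assumption: if the parameter file is absent, behave as if noiommu mode is off.
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    if (( ${#groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
        # modprobe --show-depends lists the .ko files it would insert; a hit means
        # vfio_pci and its dependencies are resolvable on this kernel.
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver) || true
echo "Looking for driver=$driver"

On the host captured in this log the sketch would print "Looking for driver=vfio-pci", matching the marker line the test emits next.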
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:21:37.887 Looking for driver=vfio-pci 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:21:37.887 11:32:21 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.169 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:41.170 11:32:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:46.445 11:32:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:46.445 11:32:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:21:46.445 11:32:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:46.445 11:32:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:21:46.445 11:32:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:21:46.445 11:32:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:21:46.445 11:32:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:21:50.642 00:21:50.642 real 0m13.013s 00:21:50.642 user 0m2.460s 00:21:50.642 sys 0m4.995s 00:21:50.642 11:32:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:50.642 11:32:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:21:50.642 ************************************ 00:21:50.642 END TEST guess_driver 00:21:50.642 ************************************ 00:21:50.642 00:21:50.642 real 0m18.144s 00:21:50.642 user 0m3.936s 00:21:50.642 sys 0m7.934s 00:21:50.642 11:32:34 setup.sh.driver -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:21:50.642 11:32:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:21:50.642 ************************************ 00:21:50.642 END TEST driver 00:21:50.642 ************************************ 00:21:50.642 11:32:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:21:50.642 11:32:34 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:50.642 11:32:34 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:50.642 11:32:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:21:50.642 ************************************ 00:21:50.642 START TEST devices 00:21:50.642 ************************************ 00:21:50.642 11:32:34 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:21:50.642 * Looking for test storage... 00:21:50.642 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:21:50.642 11:32:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:21:50.642 11:32:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:21:50.642 11:32:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:21:50.642 11:32:34 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:21:54.830 11:32:38 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:54.830 11:32:38 setup.sh.devices -- scripts/common.sh@387 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:54.830 No valid GPT data, bailing 00:21:54.830 11:32:38 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:54.830 11:32:38 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:21:54.830 11:32:38 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:21:54.830 11:32:38 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:21:54.830 11:32:38 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:54.830 11:32:38 setup.sh.devices -- setup/common.sh@80 -- # echo 8001563222016 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@204 -- # (( 8001563222016 >= min_disk_size )) 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:21:54.830 11:32:38 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:54.830 11:32:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:21:54.830 ************************************ 00:21:54.830 START TEST nvme_mount 00:21:54.830 ************************************ 00:21:54.830 11:32:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:21:54.830 11:32:38 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:21:54.830 11:32:38 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:21:54.830 11:32:38 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:21:54.830 11:32:38 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:21:54.830 11:32:38 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:21:54.830 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:21:54.831 11:32:38 
setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:21:54.831 11:32:38 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:21:55.766 Creating new GPT entries in memory. 00:21:55.766 GPT data structures destroyed! You may now partition the disk using fdisk or 00:21:55.766 other utilities. 00:21:55.766 11:32:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:21:55.766 11:32:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:21:55.766 11:32:39 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:21:55.766 11:32:39 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:21:55.766 11:32:39 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:21:56.701 Creating new GPT entries in memory. 00:21:56.701 The operation has completed successfully. 00:21:56.701 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:21:56.701 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3294881 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- 
setup/devices.sh@56 -- # : 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:21:56.702 11:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:21:59.984 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:21:59.984 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:22:00.243 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:22:00.243 /dev/nvme0n1: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54 00:22:00.243 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:22:00.243 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:22:00.243 11:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:22:00.243 11:32:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 
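The cleanup_nvme step traced above tears the first configuration down by unmounting the test directory and wiping signatures, first from the partition and then from the whole disk, before the test formats the raw device directly with a 1024M filesystem. A short sketch of that teardown, reusing the assumed mount point from the previous sketch:

    mnt=/tmp/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # removes the ext4 signature
    [[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1     # removes both GPT headers and the protective MBR
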
-- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:22:00.243 11:32:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:22:00.243 11:32:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:22:00.243 11:32:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:22:00.243 11:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:03.530 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- 
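The verify loop that repeats throughout the trace reads setup.sh's per-device output as "<bdf> <vendor> <device> <status...>" and only counts the run as successful when the target controller's status line names the expected active mount. A simplified re-creation of that loop; the sample lines fed to it are illustrative, not captured output:

    target=0000:5e:00.0
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$target" ]] || continue
        [[ $status == *"Active devices: "*"nvme0n1:nvme0n1"* ]] && found=1
    done < <(printf '%s\n' \
        '0000:5e:00.0 8086 0a54 Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev' \
        '0000:00:04.7 8086 2021 ioatdma')
    (( found == 1 )) && echo "target controller is backing the active mount"

Feeding the loop through process substitution keeps it in the current shell, so the found flag is still set afterwards, which is the property the (( found == 1 )) check above relies on.
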
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:22:03.790 11:32:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 
]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:07.076 11:32:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:22:07.334 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:22:07.334 00:22:07.334 real 0m12.684s 00:22:07.334 user 0m3.557s 00:22:07.334 sys 0m6.885s 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:07.334 11:32:51 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.334 ************************************ 00:22:07.334 END TEST nvme_mount 00:22:07.334 ************************************ 00:22:07.334 11:32:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:22:07.334 11:32:51 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:07.334 11:32:51 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:07.334 11:32:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:22:07.334 ************************************ 00:22:07.334 START TEST dm_mount 00:22:07.334 ************************************ 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:22:07.334 11:32:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:22:08.382 Creating new GPT entries in memory. 00:22:08.383 GPT data structures destroyed! You may now partition the disk using fdisk or 00:22:08.383 other utilities. 00:22:08.383 11:32:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:22:08.383 11:32:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:08.383 11:32:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
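The dm_mount test just starting above lays out two 1 GiB partitions back to back, reusing the same sector arithmetic as before: 1073741824 bytes / 512 = 2097152 sectors per partition, with the first partition starting at sector 2048 and each subsequent one starting right after the previous end. A compact sketch of that layout on an assumed scratch disk:

    disk=/dev/nvme0n1
    size=$((1073741824 / 512))          # 2097152 sectors per partition
    sgdisk "$disk" --zap-all
    start=2048
    for part in 1 2; do
        end=$((start + size - 1))       # 2099199 for partition 1, 4196351 for partition 2
        flock "$disk" sgdisk "$disk" --new=$part:$start:$end
        start=$((end + 1))              # the next partition begins immediately after this one
    done
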
2048 : part_end + 1 )) 00:22:08.383 11:32:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:22:08.383 11:32:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:22:09.327 Creating new GPT entries in memory. 00:22:09.327 The operation has completed successfully. 00:22:09.327 11:32:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:22:09.327 11:32:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:09.327 11:32:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:22:09.327 11:32:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:22:09.327 11:32:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:22:10.264 The operation has completed successfully. 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3298958 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- 
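The trace above shows dmsetup create nvme_dm_test succeeding, the new node resolving to /dev/dm-0, and both partitions listing dm-0 under their holders directory, but it does not show the table that was passed to dmsetup. A linear concatenation of the two partitions is one plausible equivalent, sketched here under that assumption; the holders check afterwards mirrors the trace:

    p1_len=$(blockdev --getsz /dev/nvme0n1p1)     # lengths in 512-byte sectors
    p2_len=$(blockdev --getsz /dev/nvme0n1p2)
    printf '%s\n' \
        "0 $p1_len linear /dev/nvme0n1p1 0" \
        "$p1_len $p2_len linear /dev/nvme0n1p2 0" |
        dmsetup create nvme_dm_test
    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] && echo "nvme0n1p1 backs $dm"
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] && echo "nvme0n1p2 backs $dm"
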
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:22:10.523 11:32:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:22:13.811 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.811 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:22:13.811 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:22:13.811 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.811 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.811 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:22:13.812 11:32:57 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:22:13.812 11:32:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:22:17.098 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:22:17.357 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:22:17.357 00:22:17.357 real 0m10.082s 00:22:17.357 user 0m2.552s 00:22:17.357 sys 0m4.617s 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:17.357 11:33:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:22:17.357 ************************************ 00:22:17.357 END TEST dm_mount 00:22:17.357 ************************************ 00:22:17.357 11:33:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 
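The dm_mount teardown traced above follows a strict order: unmount the test directory, remove the device-mapper node, and only then wipe signatures from the partitions it was built on, since a partition that still backs a live mapping should not be wiped. A short sketch of that order, with the mount point name assumed for illustration:

    mnt=/tmp/dm_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
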
00:22:17.357 11:33:01 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:22:17.357 11:33:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:22:17.357 11:33:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:17.357 11:33:01 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:22:17.357 11:33:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:22:17.357 11:33:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:22:17.614 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:22:17.614 /dev/nvme0n1: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54 00:22:17.614 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:22:17.614 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:22:17.614 11:33:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:22:17.614 11:33:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:22:17.614 11:33:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:22:17.614 11:33:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:17.614 11:33:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:22:17.614 11:33:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:22:17.614 11:33:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:22:17.871 00:22:17.871 real 0m27.248s 00:22:17.871 user 0m7.599s 00:22:17.871 sys 0m14.395s 00:22:17.871 11:33:01 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:17.871 11:33:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:22:17.871 ************************************ 00:22:17.871 END TEST devices 00:22:17.871 ************************************ 00:22:17.871 00:22:17.871 real 1m44.396s 00:22:17.871 user 0m29.059s 00:22:17.871 sys 0m54.275s 00:22:17.871 11:33:01 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:17.871 11:33:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:22:17.871 ************************************ 00:22:17.871 END TEST setup.sh 00:22:17.871 ************************************ 00:22:17.871 11:33:01 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:22:21.152 Hugepages 00:22:21.152 node hugesize free / total 00:22:21.152 node0 1048576kB 0 / 0 00:22:21.152 node0 2048kB 2048 / 2048 00:22:21.152 node1 1048576kB 0 / 0 00:22:21.152 node1 2048kB 0 / 0 00:22:21.152 00:22:21.152 Type BDF Vendor Device NUMA Driver Device Block devices 00:22:21.152 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:22:21.152 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:22:21.152 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:22:21.152 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:22:21.152 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:22:21.152 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:22:21.152 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:22:21.152 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:22:21.152 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:22:21.152 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:22:21.152 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:22:21.153 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:22:21.153 
I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:22:21.153 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:22:21.153 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:22:21.153 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:22:21.153 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:22:21.153 11:33:04 -- spdk/autotest.sh@130 -- # uname -s 00:22:21.153 11:33:04 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:22:21.153 11:33:04 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:22:21.153 11:33:04 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:22:24.445 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:24.445 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:29.722 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:22:29.722 11:33:13 -- common/autotest_common.sh@1531 -- # sleep 1 00:22:30.290 11:33:14 -- common/autotest_common.sh@1532 -- # bdfs=() 00:22:30.290 11:33:14 -- common/autotest_common.sh@1532 -- # local bdfs 00:22:30.290 11:33:14 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:22:30.290 11:33:14 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:22:30.290 11:33:14 -- common/autotest_common.sh@1512 -- # bdfs=() 00:22:30.290 11:33:14 -- common/autotest_common.sh@1512 -- # local bdfs 00:22:30.290 11:33:14 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:30.290 11:33:14 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:30.290 11:33:14 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:22:30.290 11:33:14 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:22:30.290 11:33:14 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 00:22:30.290 11:33:14 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:22:33.581 Waiting for block devices as requested 00:22:33.840 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:22:33.840 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:33.840 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:34.099 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:34.099 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:34.099 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:34.358 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:34.358 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:34.358 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:34.358 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:34.616 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:34.616 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:34.876 
0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:34.876 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:34.876 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:35.135 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:35.135 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:35.135 11:33:18 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:22:35.135 11:33:18 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:22:35.135 11:33:18 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:22:35.135 11:33:18 -- common/autotest_common.sh@1501 -- # grep 0000:5e:00.0/nvme/nvme 00:22:35.135 11:33:18 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:22:35.135 11:33:18 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:22:35.135 11:33:18 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:22:35.135 11:33:18 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:22:35.135 11:33:18 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:22:35.135 11:33:18 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:22:35.135 11:33:18 -- common/autotest_common.sh@1544 -- # grep oacs 00:22:35.135 11:33:18 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:22:35.135 11:33:18 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:22:35.135 11:33:18 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:22:35.135 11:33:18 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:22:35.135 11:33:18 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:22:35.135 11:33:18 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:22:35.135 11:33:18 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:22:35.135 11:33:18 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:22:35.135 11:33:19 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:22:35.135 11:33:19 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:22:35.135 11:33:19 -- common/autotest_common.sh@1556 -- # continue 00:22:35.135 11:33:19 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:22:35.135 11:33:19 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:35.136 11:33:19 -- common/autotest_common.sh@10 -- # set +x 00:22:35.136 11:33:19 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:22:35.136 11:33:19 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:35.136 11:33:19 -- common/autotest_common.sh@10 -- # set +x 00:22:35.136 11:33:19 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:22:39.328 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:80:04.2 (8086 
2021): ioatdma -> vfio-pci 00:22:39.328 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:39.328 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:44.603 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:22:44.603 11:33:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:22:44.603 11:33:27 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:44.603 11:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:44.603 11:33:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:22:44.603 11:33:27 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:22:44.603 11:33:27 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:22:44.603 11:33:27 -- common/autotest_common.sh@1576 -- # bdfs=() 00:22:44.603 11:33:27 -- common/autotest_common.sh@1576 -- # local bdfs 00:22:44.603 11:33:27 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:22:44.603 11:33:27 -- common/autotest_common.sh@1512 -- # bdfs=() 00:22:44.603 11:33:27 -- common/autotest_common.sh@1512 -- # local bdfs 00:22:44.603 11:33:27 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:44.603 11:33:27 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:44.603 11:33:27 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:22:44.603 11:33:27 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:22:44.603 11:33:27 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 00:22:44.603 11:33:27 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:22:44.603 11:33:27 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:22:44.603 11:33:27 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:22:44.603 11:33:27 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:22:44.603 11:33:27 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:22:44.603 11:33:27 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:5e:00.0 00:22:44.603 11:33:27 -- common/autotest_common.sh@1591 -- # [[ -z 0000:5e:00.0 ]] 00:22:44.603 11:33:27 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=3308618 00:22:44.603 11:33:27 -- common/autotest_common.sh@1597 -- # waitforlisten 3308618 00:22:44.603 11:33:27 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:22:44.603 11:33:27 -- common/autotest_common.sh@830 -- # '[' -z 3308618 ']' 00:22:44.603 11:33:27 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.603 11:33:27 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:44.603 11:33:27 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.603 11:33:27 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:44.603 11:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:44.603 [2024-06-10 11:33:27.939607] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
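The opal_revert_cleanup step traced above builds its list of candidate controllers by reading each controller's PCI device ID from sysfs and keeping only those that match 0x0a54. A stand-alone sketch of that filter; the single BDF in the loop is illustrative:

    want=0x0a54
    opal_bdfs=()
    for bdf in 0000:5e:00.0; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $dev_id == "$want" ]] && opal_bdfs+=("$bdf")
    done
    (( ${#opal_bdfs[@]} )) && printf 'candidate controller: %s\n' "${opal_bdfs[@]}"
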
00:22:44.603 [2024-06-10 11:33:27.939664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308618 ] 00:22:44.603 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.603 [2024-06-10 11:33:28.016001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.603 [2024-06-10 11:33:28.110570] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.862 11:33:28 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:44.862 11:33:28 -- common/autotest_common.sh@863 -- # return 0 00:22:44.862 11:33:28 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:22:44.862 11:33:28 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:22:44.862 11:33:28 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:22:48.173 nvme0n1 00:22:48.173 11:33:31 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:22:48.173 [2024-06-10 11:33:31.923815] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:22:48.173 request: 00:22:48.173 { 00:22:48.173 "nvme_ctrlr_name": "nvme0", 00:22:48.173 "password": "test", 00:22:48.173 "method": "bdev_nvme_opal_revert", 00:22:48.173 "req_id": 1 00:22:48.173 } 00:22:48.173 Got JSON-RPC error response 00:22:48.173 response: 00:22:48.173 { 00:22:48.173 "code": -32602, 00:22:48.173 "message": "Invalid parameters" 00:22:48.173 } 00:22:48.173 11:33:31 -- common/autotest_common.sh@1603 -- # true 00:22:48.173 11:33:31 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:22:48.173 11:33:31 -- common/autotest_common.sh@1607 -- # killprocess 3308618 00:22:48.173 11:33:31 -- common/autotest_common.sh@949 -- # '[' -z 3308618 ']' 00:22:48.173 11:33:31 -- common/autotest_common.sh@953 -- # kill -0 3308618 00:22:48.173 11:33:31 -- common/autotest_common.sh@954 -- # uname 00:22:48.173 11:33:31 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:48.173 11:33:31 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3308618 00:22:48.173 11:33:31 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:48.173 11:33:31 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:48.173 11:33:31 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3308618' 00:22:48.173 killing process with pid 3308618 00:22:48.173 11:33:31 -- common/autotest_common.sh@968 -- # kill 3308618 00:22:48.173 11:33:31 -- common/autotest_common.sh@973 -- # wait 3308618 00:22:56.300 11:33:39 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:22:56.301 11:33:39 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:22:56.301 11:33:39 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:22:56.301 11:33:39 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:22:56.301 11:33:39 -- spdk/autotest.sh@162 -- # timing_enter lib 00:22:56.301 11:33:39 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:56.301 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:22:56.301 11:33:39 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:22:56.301 11:33:39 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:22:56.301 11:33:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:56.301 
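Once spdk_tgt is listening on /var/tmp/spdk.sock, the trace above attaches the controller as bdev nvme0 and then attempts the Opal revert, which this controller rejects with the -32602 "Invalid parameters" JSON-RPC response shown. The same two calls, isolated from the test harness, look like this; the relative path to rpc.py is an assumption, while the flags are the ones in the trace:

    rpc=./scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    # On a controller without Opal support this returns the error seen above.
    $rpc bdev_nvme_opal_revert -b nvme0 -p test || echo "opal revert rejected"

Because the revert is expected to fail on drives without Opal support, the harness accepts the failure here (note the true recorded right after the error) and simply proceeds to kill the target process.
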
11:33:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:56.301 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:22:56.301 ************************************ 00:22:56.301 START TEST env 00:22:56.301 ************************************ 00:22:56.301 11:33:39 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:22:56.301 * Looking for test storage... 00:22:56.301 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:22:56.301 11:33:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:22:56.301 11:33:39 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:56.301 11:33:39 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:56.301 11:33:39 env -- common/autotest_common.sh@10 -- # set +x 00:22:56.301 ************************************ 00:22:56.301 START TEST env_memory 00:22:56.301 ************************************ 00:22:56.301 11:33:39 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:22:56.301 00:22:56.301 00:22:56.301 CUnit - A unit testing framework for C - Version 2.1-3 00:22:56.301 http://cunit.sourceforge.net/ 00:22:56.301 00:22:56.301 00:22:56.301 Suite: memory 00:22:56.301 Test: alloc and free memory map ...[2024-06-10 11:33:39.297186] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:22:56.301 passed 00:22:56.301 Test: mem map translation ...[2024-06-10 11:33:39.310070] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:22:56.301 [2024-06-10 11:33:39.310089] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:22:56.301 [2024-06-10 11:33:39.310122] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:22:56.301 [2024-06-10 11:33:39.310132] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:22:56.301 passed 00:22:56.301 Test: mem map registration ...[2024-06-10 11:33:39.331320] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:22:56.301 [2024-06-10 11:33:39.331346] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:22:56.301 passed 00:22:56.301 Test: mem map adjacent registrations ...passed 00:22:56.301 00:22:56.301 Run Summary: Type Total Ran Passed Failed Inactive 00:22:56.301 suites 1 1 n/a 0 0 00:22:56.301 tests 4 4 4 0 0 00:22:56.301 asserts 152 152 152 0 n/a 00:22:56.301 00:22:56.301 Elapsed time = 0.087 seconds 00:22:56.301 00:22:56.301 real 0m0.101s 00:22:56.301 user 0m0.089s 00:22:56.301 sys 0m0.012s 00:22:56.301 11:33:39 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:56.301 11:33:39 env.env_memory -- common/autotest_common.sh@10 
-- # set +x 00:22:56.301 ************************************ 00:22:56.301 END TEST env_memory 00:22:56.301 ************************************ 00:22:56.301 11:33:39 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:22:56.301 11:33:39 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:56.301 11:33:39 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:56.301 11:33:39 env -- common/autotest_common.sh@10 -- # set +x 00:22:56.301 ************************************ 00:22:56.301 START TEST env_vtophys 00:22:56.301 ************************************ 00:22:56.301 11:33:39 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:22:56.301 EAL: lib.eal log level changed from notice to debug 00:22:56.301 EAL: Detected lcore 0 as core 0 on socket 0 00:22:56.301 EAL: Detected lcore 1 as core 1 on socket 0 00:22:56.301 EAL: Detected lcore 2 as core 2 on socket 0 00:22:56.301 EAL: Detected lcore 3 as core 3 on socket 0 00:22:56.301 EAL: Detected lcore 4 as core 4 on socket 0 00:22:56.301 EAL: Detected lcore 5 as core 8 on socket 0 00:22:56.301 EAL: Detected lcore 6 as core 9 on socket 0 00:22:56.301 EAL: Detected lcore 7 as core 10 on socket 0 00:22:56.301 EAL: Detected lcore 8 as core 11 on socket 0 00:22:56.301 EAL: Detected lcore 9 as core 16 on socket 0 00:22:56.301 EAL: Detected lcore 10 as core 17 on socket 0 00:22:56.301 EAL: Detected lcore 11 as core 18 on socket 0 00:22:56.301 EAL: Detected lcore 12 as core 19 on socket 0 00:22:56.301 EAL: Detected lcore 13 as core 20 on socket 0 00:22:56.301 EAL: Detected lcore 14 as core 24 on socket 0 00:22:56.301 EAL: Detected lcore 15 as core 25 on socket 0 00:22:56.301 EAL: Detected lcore 16 as core 26 on socket 0 00:22:56.301 EAL: Detected lcore 17 as core 27 on socket 0 00:22:56.301 EAL: Detected lcore 18 as core 0 on socket 1 00:22:56.301 EAL: Detected lcore 19 as core 1 on socket 1 00:22:56.301 EAL: Detected lcore 20 as core 2 on socket 1 00:22:56.301 EAL: Detected lcore 21 as core 3 on socket 1 00:22:56.301 EAL: Detected lcore 22 as core 4 on socket 1 00:22:56.301 EAL: Detected lcore 23 as core 8 on socket 1 00:22:56.301 EAL: Detected lcore 24 as core 9 on socket 1 00:22:56.301 EAL: Detected lcore 25 as core 10 on socket 1 00:22:56.301 EAL: Detected lcore 26 as core 11 on socket 1 00:22:56.301 EAL: Detected lcore 27 as core 16 on socket 1 00:22:56.301 EAL: Detected lcore 28 as core 17 on socket 1 00:22:56.301 EAL: Detected lcore 29 as core 18 on socket 1 00:22:56.301 EAL: Detected lcore 30 as core 19 on socket 1 00:22:56.301 EAL: Detected lcore 31 as core 20 on socket 1 00:22:56.301 EAL: Detected lcore 32 as core 24 on socket 1 00:22:56.301 EAL: Detected lcore 33 as core 25 on socket 1 00:22:56.301 EAL: Detected lcore 34 as core 26 on socket 1 00:22:56.301 EAL: Detected lcore 35 as core 27 on socket 1 00:22:56.301 EAL: Detected lcore 36 as core 0 on socket 0 00:22:56.301 EAL: Detected lcore 37 as core 1 on socket 0 00:22:56.301 EAL: Detected lcore 38 as core 2 on socket 0 00:22:56.301 EAL: Detected lcore 39 as core 3 on socket 0 00:22:56.301 EAL: Detected lcore 40 as core 4 on socket 0 00:22:56.301 EAL: Detected lcore 41 as core 8 on socket 0 00:22:56.301 EAL: Detected lcore 42 as core 9 on socket 0 00:22:56.301 EAL: Detected lcore 43 as core 10 on socket 0 00:22:56.301 EAL: Detected lcore 44 as core 11 on socket 0 00:22:56.301 EAL: Detected lcore 45 as core 16 on 
socket 0 00:22:56.301 EAL: Detected lcore 46 as core 17 on socket 0 00:22:56.301 EAL: Detected lcore 47 as core 18 on socket 0 00:22:56.301 EAL: Detected lcore 48 as core 19 on socket 0 00:22:56.301 EAL: Detected lcore 49 as core 20 on socket 0 00:22:56.301 EAL: Detected lcore 50 as core 24 on socket 0 00:22:56.301 EAL: Detected lcore 51 as core 25 on socket 0 00:22:56.301 EAL: Detected lcore 52 as core 26 on socket 0 00:22:56.301 EAL: Detected lcore 53 as core 27 on socket 0 00:22:56.301 EAL: Detected lcore 54 as core 0 on socket 1 00:22:56.301 EAL: Detected lcore 55 as core 1 on socket 1 00:22:56.301 EAL: Detected lcore 56 as core 2 on socket 1 00:22:56.301 EAL: Detected lcore 57 as core 3 on socket 1 00:22:56.301 EAL: Detected lcore 58 as core 4 on socket 1 00:22:56.301 EAL: Detected lcore 59 as core 8 on socket 1 00:22:56.301 EAL: Detected lcore 60 as core 9 on socket 1 00:22:56.301 EAL: Detected lcore 61 as core 10 on socket 1 00:22:56.301 EAL: Detected lcore 62 as core 11 on socket 1 00:22:56.301 EAL: Detected lcore 63 as core 16 on socket 1 00:22:56.301 EAL: Detected lcore 64 as core 17 on socket 1 00:22:56.301 EAL: Detected lcore 65 as core 18 on socket 1 00:22:56.301 EAL: Detected lcore 66 as core 19 on socket 1 00:22:56.301 EAL: Detected lcore 67 as core 20 on socket 1 00:22:56.301 EAL: Detected lcore 68 as core 24 on socket 1 00:22:56.301 EAL: Detected lcore 69 as core 25 on socket 1 00:22:56.301 EAL: Detected lcore 70 as core 26 on socket 1 00:22:56.301 EAL: Detected lcore 71 as core 27 on socket 1 00:22:56.301 EAL: Maximum logical cores by configuration: 128 00:22:56.301 EAL: Detected CPU lcores: 72 00:22:56.301 EAL: Detected NUMA nodes: 2 00:22:56.301 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:22:56.301 EAL: Checking presence of .so 'librte_eal.so.24' 00:22:56.301 EAL: Checking presence of .so 'librte_eal.so' 00:22:56.301 EAL: Detected static linkage of DPDK 00:22:56.301 EAL: No shared files mode enabled, IPC will be disabled 00:22:56.301 EAL: Bus pci wants IOVA as 'DC' 00:22:56.301 EAL: Buses did not request a specific IOVA mode. 00:22:56.301 EAL: IOMMU is available, selecting IOVA as VA mode. 00:22:56.301 EAL: Selected IOVA mode 'VA' 00:22:56.301 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.301 EAL: Probing VFIO support... 00:22:56.301 EAL: IOMMU type 1 (Type 1) is supported 00:22:56.301 EAL: IOMMU type 7 (sPAPR) is not supported 00:22:56.301 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:22:56.301 EAL: VFIO support initialized 00:22:56.301 EAL: Ask a virtual area of 0x2e000 bytes 00:22:56.301 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:22:56.301 EAL: Setting up physically contiguous memory... 
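The EAL detection lines above enumerate 72 lcores across 2 NUMA nodes; counting the distinct (core, socket) pairs shows the layout: 18 physical cores per socket, two sockets, two hardware threads per core (lcores 36-71 repeat the same core/socket pairs as lcores 0-35). A small sketch that parses such "Detected lcore X as core Y on socket Z" lines and derives those counts; the parsing is purely illustrative, the numbers are the ones in the log.

import re
from collections import defaultdict

LCORE_RE = re.compile(r"Detected lcore (\d+) as core (\d+) on socket (\d+)")

def summarize_topology(eal_log_lines):
    """Count physical cores per socket and total lcores from EAL detection output."""
    threads_per_pair = defaultdict(int)   # (socket, core) -> hardware threads seen
    for line in eal_log_lines:
        m = LCORE_RE.search(line)
        if m:
            _lcore, core, sock = (int(g) for g in m.groups())
            threads_per_pair[(sock, core)] += 1
    sockets = sorted({s for s, _ in threads_per_pair})
    cores_per_socket = {s: sum(1 for (ss, _) in threads_per_pair if ss == s) for s in sockets}
    total_lcores = sum(threads_per_pair.values())
    return cores_per_socket, total_lcores

if __name__ == "__main__":
    sample = [
        "EAL: Detected lcore 0 as core 0 on socket 0",
        "EAL: Detected lcore 36 as core 0 on socket 0",  # hyperthread sibling of lcore 0
    ]
    print(summarize_topology(sample))
    # Fed the full EAL output above this yields {0: 18, 1: 18} physical cores and 72 lcores.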
00:22:56.301 EAL: Setting maximum number of open files to 524288 00:22:56.301 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:22:56.301 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:22:56.301 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:22:56.301 EAL: Ask a virtual area of 0x61000 bytes 00:22:56.301 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:22:56.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:56.301 EAL: Ask a virtual area of 0x400000000 bytes 00:22:56.301 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:22:56.301 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:22:56.302 EAL: Ask a virtual area of 0x61000 bytes 00:22:56.302 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:22:56.302 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:56.302 EAL: Ask a virtual area of 0x400000000 bytes 00:22:56.302 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:22:56.302 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:22:56.302 EAL: Ask a virtual area of 0x61000 bytes 00:22:56.302 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:22:56.302 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:56.302 EAL: Ask a virtual area of 0x400000000 bytes 00:22:56.302 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:22:56.302 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:22:56.302 EAL: Ask a virtual area of 0x61000 bytes 00:22:56.302 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:22:56.302 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:56.302 EAL: Ask a virtual area of 0x400000000 bytes 00:22:56.302 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:22:56.302 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:22:56.302 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:22:56.302 EAL: Ask a virtual area of 0x61000 bytes 00:22:56.302 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:22:56.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:22:56.302 EAL: Ask a virtual area of 0x400000000 bytes 00:22:56.302 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:22:56.302 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:22:56.302 EAL: Ask a virtual area of 0x61000 bytes 00:22:56.302 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:22:56.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:22:56.302 EAL: Ask a virtual area of 0x400000000 bytes 00:22:56.302 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:22:56.302 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:22:56.302 EAL: Ask a virtual area of 0x61000 bytes 00:22:56.302 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:22:56.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:22:56.302 EAL: Ask a virtual area of 0x400000000 bytes 00:22:56.302 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:22:56.302 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:22:56.302 EAL: Ask a virtual area of 0x61000 bytes 00:22:56.302 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:22:56.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:22:56.302 EAL: Ask a virtual area of 0x400000000 bytes 00:22:56.302 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:22:56.302 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:22:56.302 EAL: Hugepages will be freed exactly as allocated. 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: TSC frequency is ~2300000 KHz 00:22:56.302 EAL: Main lcore 0 is ready (tid=7f8a55cb2a00;cpuset=[0]) 00:22:56.302 EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 0 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 2MB 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Mem event callback 'spdk:(nil)' registered 00:22:56.302 00:22:56.302 00:22:56.302 CUnit - A unit testing framework for C - Version 2.1-3 00:22:56.302 http://cunit.sourceforge.net/ 00:22:56.302 00:22:56.302 00:22:56.302 Suite: components_suite 00:22:56.302 Test: vtophys_malloc_test ...passed 00:22:56.302 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 4MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 4MB 00:22:56.302 EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 6MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 6MB 00:22:56.302 EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 10MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 10MB 00:22:56.302 EAL: Trying to obtain current memory policy. 
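The memseg list setup above reserves virtual address space up front: each list is declared with n_segs:8192 segments of hugepage_sz:2097152 bytes, which is exactly the 0x400000000 (16 GiB) area requested per list, and with 4 lists per socket on 2 sockets that comes to 128 GiB of reserved VA plus a small 0x61000-byte header area per list. A sketch that reproduces this arithmetic from the values printed in the log.

# All constants below are read directly from the EAL output above.
N_SEGS = 8192              # "Creating 4 segment lists: n_segs:8192 ..."
HUGEPAGE_SZ = 2097152      # 2 MiB hugepages
LISTS_PER_SOCKET = 4
SOCKETS = 2
HEADER_SZ = 0x61000        # the extra 0x61000-byte virtual area asked for per list

per_list = N_SEGS * HUGEPAGE_SZ
assert per_list == 0x400000000        # 16 GiB, matching "size = 0x400000000" in the log

total_data_va = per_list * LISTS_PER_SOCKET * SOCKETS
total_header_va = HEADER_SZ * LISTS_PER_SOCKET * SOCKETS

print(f"VA reserved per memseg list:  {per_list / 2**30:.0f} GiB")
print(f"VA reserved for segment data: {total_data_va / 2**30:.0f} GiB")   # 128 GiB total
print(f"VA reserved for list headers: {total_header_va / 2**20:.2f} MiB")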
00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 18MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 18MB 00:22:56.302 EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 34MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 34MB 00:22:56.302 EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 66MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 66MB 00:22:56.302 EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 130MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 130MB 00:22:56.302 EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 258MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 258MB 00:22:56.302 EAL: Trying to obtain current memory policy. 
00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.302 EAL: Restoring previous memory policy: 4 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was expanded by 514MB 00:22:56.302 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.302 EAL: request: mp_malloc_sync 00:22:56.302 EAL: No shared files mode enabled, IPC is disabled 00:22:56.302 EAL: Heap on socket 0 was shrunk by 514MB 00:22:56.302 EAL: Trying to obtain current memory policy. 00:22:56.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:56.560 EAL: Restoring previous memory policy: 4 00:22:56.560 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.560 EAL: request: mp_malloc_sync 00:22:56.560 EAL: No shared files mode enabled, IPC is disabled 00:22:56.560 EAL: Heap on socket 0 was expanded by 1026MB 00:22:56.560 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.818 EAL: request: mp_malloc_sync 00:22:56.818 EAL: No shared files mode enabled, IPC is disabled 00:22:56.818 EAL: Heap on socket 0 was shrunk by 1026MB 00:22:56.818 passed 00:22:56.818 00:22:56.818 Run Summary: Type Total Ran Passed Failed Inactive 00:22:56.818 suites 1 1 n/a 0 0 00:22:56.818 tests 2 2 2 0 0 00:22:56.818 asserts 497 497 497 0 n/a 00:22:56.818 00:22:56.818 Elapsed time = 1.112 seconds 00:22:56.818 EAL: Calling mem event callback 'spdk:(nil)' 00:22:56.818 EAL: request: mp_malloc_sync 00:22:56.818 EAL: No shared files mode enabled, IPC is disabled 00:22:56.818 EAL: Heap on socket 0 was shrunk by 2MB 00:22:56.818 EAL: No shared files mode enabled, IPC is disabled 00:22:56.818 EAL: No shared files mode enabled, IPC is disabled 00:22:56.818 EAL: No shared files mode enabled, IPC is disabled 00:22:56.818 00:22:56.818 real 0m1.241s 00:22:56.818 user 0m0.709s 00:22:56.818 sys 0m0.506s 00:22:56.818 11:33:40 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:56.818 11:33:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:22:56.818 ************************************ 00:22:56.818 END TEST env_vtophys 00:22:56.818 ************************************ 00:22:56.818 11:33:40 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:22:56.818 11:33:40 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:56.818 11:33:40 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:56.819 11:33:40 env -- common/autotest_common.sh@10 -- # set +x 00:22:56.819 ************************************ 00:22:56.819 START TEST env_pci 00:22:56.819 ************************************ 00:22:56.819 11:33:40 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:22:56.819 00:22:56.819 00:22:56.819 CUnit - A unit testing framework for C - Version 2.1-3 00:22:56.819 http://cunit.sourceforge.net/ 00:22:56.819 00:22:56.819 00:22:56.819 Suite: pci 00:22:56.819 Test: pci_hook ...[2024-06-10 11:33:40.763657] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3310363 has claimed it 00:22:57.077 EAL: Cannot find device (10000:00:01.0) 00:22:57.077 EAL: Failed to attach device on primary process 00:22:57.077 passed 00:22:57.077 00:22:57.077 Run Summary: Type Total Ran Passed Failed Inactive 
00:22:57.077 suites 1 1 n/a 0 0 00:22:57.077 tests 1 1 1 0 0 00:22:57.077 asserts 25 25 25 0 n/a 00:22:57.077 00:22:57.077 Elapsed time = 0.037 seconds 00:22:57.077 00:22:57.077 real 0m0.057s 00:22:57.077 user 0m0.017s 00:22:57.077 sys 0m0.040s 00:22:57.077 11:33:40 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:57.077 11:33:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:22:57.077 ************************************ 00:22:57.077 END TEST env_pci 00:22:57.077 ************************************ 00:22:57.077 11:33:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:22:57.077 11:33:40 env -- env/env.sh@15 -- # uname 00:22:57.077 11:33:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:22:57.077 11:33:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:22:57.077 11:33:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:22:57.077 11:33:40 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:22:57.077 11:33:40 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:57.077 11:33:40 env -- common/autotest_common.sh@10 -- # set +x 00:22:57.077 ************************************ 00:22:57.077 START TEST env_dpdk_post_init 00:22:57.077 ************************************ 00:22:57.077 11:33:40 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:22:57.077 EAL: Detected CPU lcores: 72 00:22:57.077 EAL: Detected NUMA nodes: 2 00:22:57.077 EAL: Detected static linkage of DPDK 00:22:57.077 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:22:57.077 EAL: Selected IOVA mode 'VA' 00:22:57.077 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.077 EAL: VFIO support initialized 00:22:57.077 TELEMETRY: No legacy callbacks, legacy socket not created 00:22:57.077 EAL: Using IOMMU type 1 (Type 1) 00:22:58.012 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:23:07.969 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:23:07.969 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:23:07.969 Starting DPDK initialization... 00:23:07.969 Starting SPDK post initialization... 00:23:07.969 SPDK NVMe probe 00:23:07.969 Attaching to 0000:5e:00.0 00:23:07.969 Attached to 0000:5e:00.0 00:23:07.969 Cleaning up... 
00:23:07.969 00:23:07.969 real 0m9.633s 00:23:07.969 user 0m7.407s 00:23:07.969 sys 0m1.464s 00:23:07.969 11:33:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:07.969 11:33:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:23:07.969 ************************************ 00:23:07.969 END TEST env_dpdk_post_init 00:23:07.969 ************************************ 00:23:07.969 11:33:50 env -- env/env.sh@26 -- # uname 00:23:07.969 11:33:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:23:07.969 11:33:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:23:07.969 11:33:50 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:07.969 11:33:50 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:07.970 11:33:50 env -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 ************************************ 00:23:07.970 START TEST env_mem_callbacks 00:23:07.970 ************************************ 00:23:07.970 11:33:50 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:23:07.970 EAL: Detected CPU lcores: 72 00:23:07.970 EAL: Detected NUMA nodes: 2 00:23:07.970 EAL: Detected static linkage of DPDK 00:23:07.970 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:23:07.970 EAL: Selected IOVA mode 'VA' 00:23:07.970 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.970 EAL: VFIO support initialized 00:23:07.970 TELEMETRY: No legacy callbacks, legacy socket not created 00:23:07.970 00:23:07.970 00:23:07.970 CUnit - A unit testing framework for C - Version 2.1-3 00:23:07.970 http://cunit.sourceforge.net/ 00:23:07.970 00:23:07.970 00:23:07.970 Suite: memory 00:23:07.970 Test: test ... 
00:23:07.970 register 0x200000200000 2097152 00:23:07.970 malloc 3145728 00:23:07.970 register 0x200000400000 4194304 00:23:07.970 buf 0x200000500000 len 3145728 PASSED 00:23:07.970 malloc 64 00:23:07.970 buf 0x2000004fff40 len 64 PASSED 00:23:07.970 malloc 4194304 00:23:07.970 register 0x200000800000 6291456 00:23:07.970 buf 0x200000a00000 len 4194304 PASSED 00:23:07.970 free 0x200000500000 3145728 00:23:07.970 free 0x2000004fff40 64 00:23:07.970 unregister 0x200000400000 4194304 PASSED 00:23:07.970 free 0x200000a00000 4194304 00:23:07.970 unregister 0x200000800000 6291456 PASSED 00:23:07.970 malloc 8388608 00:23:07.970 register 0x200000400000 10485760 00:23:07.970 buf 0x200000600000 len 8388608 PASSED 00:23:07.970 free 0x200000600000 8388608 00:23:07.970 unregister 0x200000400000 10485760 PASSED 00:23:07.970 passed 00:23:07.970 00:23:07.970 Run Summary: Type Total Ran Passed Failed Inactive 00:23:07.970 suites 1 1 n/a 0 0 00:23:07.970 tests 1 1 1 0 0 00:23:07.970 asserts 15 15 15 0 n/a 00:23:07.970 00:23:07.970 Elapsed time = 0.005 seconds 00:23:07.970 00:23:07.970 real 0m0.069s 00:23:07.970 user 0m0.024s 00:23:07.970 sys 0m0.045s 00:23:07.970 11:33:50 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:07.970 11:33:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 ************************************ 00:23:07.970 END TEST env_mem_callbacks 00:23:07.970 ************************************ 00:23:07.970 00:23:07.970 real 0m11.579s 00:23:07.970 user 0m8.405s 00:23:07.970 sys 0m2.418s 00:23:07.970 11:33:50 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:07.970 11:33:50 env -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 ************************************ 00:23:07.970 END TEST env 00:23:07.970 ************************************ 00:23:07.970 11:33:50 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:23:07.970 11:33:50 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:07.970 11:33:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:07.970 11:33:50 -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 ************************************ 00:23:07.970 START TEST rpc 00:23:07.970 ************************************ 00:23:07.970 11:33:50 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:23:07.970 * Looking for test storage... 00:23:07.970 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:23:07.970 11:33:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3311779 00:23:07.970 11:33:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:07.970 11:33:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:23:07.970 11:33:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3311779 00:23:07.970 11:33:50 rpc -- common/autotest_common.sh@830 -- # '[' -z 3311779 ']' 00:23:07.970 11:33:50 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.970 11:33:50 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:07.970 11:33:50 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
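The rpc test above starts a fresh spdk_tgt (with -e bdev) and then blocks in waitforlisten until the target accepts connections on /var/tmp/spdk.sock, retrying up to max_retries=100 times. The real helper is a bash function in autotest_common.sh; below is only a hedged Python equivalent of the same poll-until-listening idea, with the socket path and retry count taken from the log and the sleep interval chosen arbitrarily.

import socket, time

def wait_for_listen(sock_path="/var/tmp/spdk.sock", max_retries=100, delay=0.5):
    """Poll until a Unix-domain socket accepts connections, in the spirit of waitforlisten."""
    for attempt in range(max_retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
            return attempt                    # target is up and listening
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)                 # not there yet; try again
    raise TimeoutError(f"{sock_path} never came up after {max_retries} attempts")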
00:23:07.970 11:33:50 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:07.970 11:33:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 [2024-06-10 11:33:50.920344] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:07.970 [2024-06-10 11:33:50.920418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311779 ] 00:23:07.970 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.970 [2024-06-10 11:33:51.001147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.970 [2024-06-10 11:33:51.093339] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:23:07.970 [2024-06-10 11:33:51.093382] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3311779' to capture a snapshot of events at runtime. 00:23:07.970 [2024-06-10 11:33:51.093391] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.970 [2024-06-10 11:33:51.093416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.970 [2024-06-10 11:33:51.093424] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3311779 for offline analysis/debug. 00:23:07.970 [2024-06-10 11:33:51.093450] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.970 11:33:51 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:07.970 11:33:51 rpc -- common/autotest_common.sh@863 -- # return 0 00:23:07.970 11:33:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:23:07.970 11:33:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:23:07.970 11:33:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:23:07.970 11:33:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:23:07.970 11:33:51 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:07.970 11:33:51 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:07.970 11:33:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 ************************************ 00:23:07.970 START TEST rpc_integrity 00:23:07.970 ************************************ 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:23:07.970 11:33:51 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:23:07.970 { 00:23:07.970 "name": "Malloc0", 00:23:07.970 "aliases": [ 00:23:07.970 "25af1933-c77f-4a03-ba4a-fe4777a5b602" 00:23:07.970 ], 00:23:07.970 "product_name": "Malloc disk", 00:23:07.970 "block_size": 512, 00:23:07.970 "num_blocks": 16384, 00:23:07.970 "uuid": "25af1933-c77f-4a03-ba4a-fe4777a5b602", 00:23:07.970 "assigned_rate_limits": { 00:23:07.970 "rw_ios_per_sec": 0, 00:23:07.970 "rw_mbytes_per_sec": 0, 00:23:07.970 "r_mbytes_per_sec": 0, 00:23:07.970 "w_mbytes_per_sec": 0 00:23:07.970 }, 00:23:07.970 "claimed": false, 00:23:07.970 "zoned": false, 00:23:07.970 "supported_io_types": { 00:23:07.970 "read": true, 00:23:07.970 "write": true, 00:23:07.970 "unmap": true, 00:23:07.970 "write_zeroes": true, 00:23:07.970 "flush": true, 00:23:07.970 "reset": true, 00:23:07.970 "compare": false, 00:23:07.970 "compare_and_write": false, 00:23:07.970 "abort": true, 00:23:07.970 "nvme_admin": false, 00:23:07.970 "nvme_io": false 00:23:07.970 }, 00:23:07.970 "memory_domains": [ 00:23:07.970 { 00:23:07.970 "dma_device_id": "system", 00:23:07.970 "dma_device_type": 1 00:23:07.970 }, 00:23:07.970 { 00:23:07.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.970 "dma_device_type": 2 00:23:07.970 } 00:23:07.970 ], 00:23:07.970 "driver_specific": {} 00:23:07.970 } 00:23:07.970 ]' 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:23:07.970 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.970 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:08.228 [2024-06-10 11:33:51.921503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:23:08.228 [2024-06-10 11:33:51.921538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.228 [2024-06-10 11:33:51.921557] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x448aaa0 00:23:08.228 [2024-06-10 11:33:51.921566] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.228 [2024-06-10 11:33:51.922485] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.228 [2024-06-10 11:33:51.922510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:23:08.228 Passthru0 00:23:08.228 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.228 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:23:08.228 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.228 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:08.228 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.228 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:23:08.228 { 00:23:08.228 "name": "Malloc0", 00:23:08.228 "aliases": [ 00:23:08.228 "25af1933-c77f-4a03-ba4a-fe4777a5b602" 00:23:08.228 ], 00:23:08.228 "product_name": "Malloc disk", 00:23:08.228 "block_size": 512, 00:23:08.228 "num_blocks": 16384, 00:23:08.228 "uuid": "25af1933-c77f-4a03-ba4a-fe4777a5b602", 00:23:08.228 "assigned_rate_limits": { 00:23:08.228 "rw_ios_per_sec": 0, 00:23:08.228 "rw_mbytes_per_sec": 0, 00:23:08.228 "r_mbytes_per_sec": 0, 00:23:08.228 "w_mbytes_per_sec": 0 00:23:08.228 }, 00:23:08.228 "claimed": true, 00:23:08.228 "claim_type": "exclusive_write", 00:23:08.228 "zoned": false, 00:23:08.228 "supported_io_types": { 00:23:08.228 "read": true, 00:23:08.228 "write": true, 00:23:08.228 "unmap": true, 00:23:08.228 "write_zeroes": true, 00:23:08.228 "flush": true, 00:23:08.228 "reset": true, 00:23:08.228 "compare": false, 00:23:08.228 "compare_and_write": false, 00:23:08.228 "abort": true, 00:23:08.228 "nvme_admin": false, 00:23:08.228 "nvme_io": false 00:23:08.228 }, 00:23:08.228 "memory_domains": [ 00:23:08.228 { 00:23:08.228 "dma_device_id": "system", 00:23:08.228 "dma_device_type": 1 00:23:08.228 }, 00:23:08.228 { 00:23:08.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.228 "dma_device_type": 2 00:23:08.228 } 00:23:08.228 ], 00:23:08.228 "driver_specific": {} 00:23:08.228 }, 00:23:08.228 { 00:23:08.228 "name": "Passthru0", 00:23:08.228 "aliases": [ 00:23:08.228 "7aba791b-4727-554b-a62a-210f6868e3dd" 00:23:08.228 ], 00:23:08.228 "product_name": "passthru", 00:23:08.228 "block_size": 512, 00:23:08.229 "num_blocks": 16384, 00:23:08.229 "uuid": "7aba791b-4727-554b-a62a-210f6868e3dd", 00:23:08.229 "assigned_rate_limits": { 00:23:08.229 "rw_ios_per_sec": 0, 00:23:08.229 "rw_mbytes_per_sec": 0, 00:23:08.229 "r_mbytes_per_sec": 0, 00:23:08.229 "w_mbytes_per_sec": 0 00:23:08.229 }, 00:23:08.229 "claimed": false, 00:23:08.229 "zoned": false, 00:23:08.229 "supported_io_types": { 00:23:08.229 "read": true, 00:23:08.229 "write": true, 00:23:08.229 "unmap": true, 00:23:08.229 "write_zeroes": true, 00:23:08.229 "flush": true, 00:23:08.229 "reset": true, 00:23:08.229 "compare": false, 00:23:08.229 "compare_and_write": false, 00:23:08.229 "abort": true, 00:23:08.229 "nvme_admin": false, 00:23:08.229 "nvme_io": false 00:23:08.229 }, 00:23:08.229 "memory_domains": [ 00:23:08.229 { 00:23:08.229 "dma_device_id": "system", 00:23:08.229 "dma_device_type": 1 00:23:08.229 }, 00:23:08.229 { 00:23:08.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.229 "dma_device_type": 2 00:23:08.229 } 00:23:08.229 ], 00:23:08.229 "driver_specific": { 00:23:08.229 "passthru": { 00:23:08.229 "name": "Passthru0", 00:23:08.229 "base_bdev_name": "Malloc0" 00:23:08.229 } 00:23:08.229 } 00:23:08.229 } 00:23:08.229 ]' 00:23:08.229 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:23:08.229 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:23:08.229 11:33:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:23:08.229 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.229 11:33:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
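The rpc_integrity test above drives a complete passthru-bdev lifecycle through scripts/rpc.py: create an 8 MiB malloc bdev with 512-byte blocks, layer Passthru0 on top of it, confirm bdev_get_bdevs now reports two bdevs (with Malloc0 claimed exclusive_write), then tear both down and check the list is empty again. A hedged sketch of the same sequence driven from Python via subprocess; the rpc.py path and every command and argument are lifted from the log, and parsing bdev_get_bdevs stdout as JSON is how the test's jq checks consume it.

import json, subprocess

RPC = "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py"  # path from the log

def rpc(*args):
    """Run scripts/rpc.py against the default /var/tmp/spdk.sock and return stdout."""
    return subprocess.run([RPC, *args], check=True,
                          capture_output=True, text=True).stdout

# Same sequence as the rpc_integrity test above.
malloc = rpc("bdev_malloc_create", "8", "512").strip()      # prints the new name, e.g. "Malloc0"
rpc("bdev_passthru_create", "-b", malloc, "-p", "Passthru0")

bdevs = json.loads(rpc("bdev_get_bdevs"))
assert len(bdevs) == 2                                       # the jq 'length == 2' check
assert any(b["name"] == malloc and b["claimed"] for b in bdevs)  # Malloc0 claimed by the passthru

rpc("bdev_passthru_delete", "Passthru0")
rpc("bdev_malloc_delete", malloc)
assert json.loads(rpc("bdev_get_bdevs")) == []               # back to an empty bdev list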
00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.229 11:33:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.229 11:33:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.229 11:33:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:23:08.229 11:33:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:23:08.229 11:33:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:23:08.229 00:23:08.229 real 0m0.292s 00:23:08.229 user 0m0.178s 00:23:08.229 sys 0m0.049s 00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:08.229 11:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:08.229 ************************************ 00:23:08.229 END TEST rpc_integrity 00:23:08.229 ************************************ 00:23:08.229 11:33:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:23:08.229 11:33:52 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:08.229 11:33:52 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:08.229 11:33:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:08.229 ************************************ 00:23:08.229 START TEST rpc_plugins 00:23:08.229 ************************************ 00:23:08.229 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:23:08.229 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:23:08.229 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.229 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:23:08.229 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.229 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:23:08.229 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:23:08.229 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.229 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.486 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:23:08.486 { 00:23:08.486 "name": "Malloc1", 00:23:08.486 "aliases": [ 00:23:08.486 "2b12f18a-32f7-4833-ab11-7a2cd5d27068" 00:23:08.486 ], 00:23:08.486 "product_name": "Malloc disk", 00:23:08.486 "block_size": 4096, 00:23:08.486 "num_blocks": 256, 00:23:08.486 "uuid": "2b12f18a-32f7-4833-ab11-7a2cd5d27068", 00:23:08.486 "assigned_rate_limits": { 00:23:08.486 "rw_ios_per_sec": 0, 00:23:08.486 "rw_mbytes_per_sec": 0, 00:23:08.486 "r_mbytes_per_sec": 0, 00:23:08.486 "w_mbytes_per_sec": 0 00:23:08.486 }, 00:23:08.486 "claimed": false, 00:23:08.486 "zoned": false, 00:23:08.486 "supported_io_types": { 00:23:08.486 "read": true, 00:23:08.486 "write": true, 00:23:08.486 "unmap": true, 00:23:08.486 "write_zeroes": true, 
00:23:08.486 "flush": true, 00:23:08.486 "reset": true, 00:23:08.486 "compare": false, 00:23:08.486 "compare_and_write": false, 00:23:08.486 "abort": true, 00:23:08.486 "nvme_admin": false, 00:23:08.486 "nvme_io": false 00:23:08.486 }, 00:23:08.486 "memory_domains": [ 00:23:08.486 { 00:23:08.486 "dma_device_id": "system", 00:23:08.486 "dma_device_type": 1 00:23:08.486 }, 00:23:08.486 { 00:23:08.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.486 "dma_device_type": 2 00:23:08.486 } 00:23:08.486 ], 00:23:08.486 "driver_specific": {} 00:23:08.486 } 00:23:08.486 ]' 00:23:08.486 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:23:08.486 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:23:08.486 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.486 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.486 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:23:08.486 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:23:08.486 11:33:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:23:08.486 00:23:08.486 real 0m0.149s 00:23:08.486 user 0m0.088s 00:23:08.486 sys 0m0.024s 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:08.486 11:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:23:08.486 ************************************ 00:23:08.486 END TEST rpc_plugins 00:23:08.486 ************************************ 00:23:08.486 11:33:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:23:08.486 11:33:52 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:08.486 11:33:52 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:08.486 11:33:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:08.486 ************************************ 00:23:08.486 START TEST rpc_trace_cmd_test 00:23:08.486 ************************************ 00:23:08.486 11:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:23:08.487 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:23:08.487 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:23:08.487 11:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.487 11:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.487 11:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.487 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:23:08.487 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3311779", 00:23:08.487 "tpoint_group_mask": "0x8", 00:23:08.487 "iscsi_conn": { 00:23:08.487 "mask": "0x2", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "scsi": { 00:23:08.487 "mask": "0x4", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "bdev": { 00:23:08.487 "mask": "0x8", 00:23:08.487 "tpoint_mask": 
"0xffffffffffffffff" 00:23:08.487 }, 00:23:08.487 "nvmf_rdma": { 00:23:08.487 "mask": "0x10", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "nvmf_tcp": { 00:23:08.487 "mask": "0x20", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "ftl": { 00:23:08.487 "mask": "0x40", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "blobfs": { 00:23:08.487 "mask": "0x80", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "dsa": { 00:23:08.487 "mask": "0x200", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "thread": { 00:23:08.487 "mask": "0x400", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "nvme_pcie": { 00:23:08.487 "mask": "0x800", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "iaa": { 00:23:08.487 "mask": "0x1000", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "nvme_tcp": { 00:23:08.487 "mask": "0x2000", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "bdev_nvme": { 00:23:08.487 "mask": "0x4000", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 }, 00:23:08.487 "sock": { 00:23:08.487 "mask": "0x8000", 00:23:08.487 "tpoint_mask": "0x0" 00:23:08.487 } 00:23:08.487 }' 00:23:08.487 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:23:08.744 00:23:08.744 real 0m0.235s 00:23:08.744 user 0m0.194s 00:23:08.744 sys 0m0.031s 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:08.744 11:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.744 ************************************ 00:23:08.744 END TEST rpc_trace_cmd_test 00:23:08.744 ************************************ 00:23:08.744 11:33:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:23:08.744 11:33:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:23:08.744 11:33:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:23:08.744 11:33:52 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:08.744 11:33:52 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:08.744 11:33:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:08.744 ************************************ 00:23:08.744 START TEST rpc_daemon_integrity 00:23:08.744 ************************************ 00:23:08.744 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:23:09.002 { 00:23:09.002 "name": "Malloc2", 00:23:09.002 "aliases": [ 00:23:09.002 "703d9a86-70c7-400d-b027-3331f6df90d7" 00:23:09.002 ], 00:23:09.002 "product_name": "Malloc disk", 00:23:09.002 "block_size": 512, 00:23:09.002 "num_blocks": 16384, 00:23:09.002 "uuid": "703d9a86-70c7-400d-b027-3331f6df90d7", 00:23:09.002 "assigned_rate_limits": { 00:23:09.002 "rw_ios_per_sec": 0, 00:23:09.002 "rw_mbytes_per_sec": 0, 00:23:09.002 "r_mbytes_per_sec": 0, 00:23:09.002 "w_mbytes_per_sec": 0 00:23:09.002 }, 00:23:09.002 "claimed": false, 00:23:09.002 "zoned": false, 00:23:09.002 "supported_io_types": { 00:23:09.002 "read": true, 00:23:09.002 "write": true, 00:23:09.002 "unmap": true, 00:23:09.002 "write_zeroes": true, 00:23:09.002 "flush": true, 00:23:09.002 "reset": true, 00:23:09.002 "compare": false, 00:23:09.002 "compare_and_write": false, 00:23:09.002 "abort": true, 00:23:09.002 "nvme_admin": false, 00:23:09.002 "nvme_io": false 00:23:09.002 }, 00:23:09.002 "memory_domains": [ 00:23:09.002 { 00:23:09.002 "dma_device_id": "system", 00:23:09.002 "dma_device_type": 1 00:23:09.002 }, 00:23:09.002 { 00:23:09.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.002 "dma_device_type": 2 00:23:09.002 } 00:23:09.002 ], 00:23:09.002 "driver_specific": {} 00:23:09.002 } 00:23:09.002 ]' 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.002 [2024-06-10 11:33:52.831873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:23:09.002 [2024-06-10 11:33:52.831913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.002 [2024-06-10 11:33:52.831932] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x461d460 00:23:09.002 [2024-06-10 11:33:52.831942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.002 [2024-06-10 11:33:52.832691] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.002 [2024-06-10 11:33:52.832714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:23:09.002 Passthru0 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.002 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:23:09.002 { 00:23:09.002 "name": "Malloc2", 00:23:09.002 "aliases": [ 00:23:09.002 "703d9a86-70c7-400d-b027-3331f6df90d7" 00:23:09.002 ], 00:23:09.002 "product_name": "Malloc disk", 00:23:09.002 "block_size": 512, 00:23:09.002 "num_blocks": 16384, 00:23:09.002 "uuid": "703d9a86-70c7-400d-b027-3331f6df90d7", 00:23:09.002 "assigned_rate_limits": { 00:23:09.002 "rw_ios_per_sec": 0, 00:23:09.002 "rw_mbytes_per_sec": 0, 00:23:09.002 "r_mbytes_per_sec": 0, 00:23:09.002 "w_mbytes_per_sec": 0 00:23:09.002 }, 00:23:09.002 "claimed": true, 00:23:09.002 "claim_type": "exclusive_write", 00:23:09.002 "zoned": false, 00:23:09.002 "supported_io_types": { 00:23:09.002 "read": true, 00:23:09.002 "write": true, 00:23:09.002 "unmap": true, 00:23:09.002 "write_zeroes": true, 00:23:09.002 "flush": true, 00:23:09.002 "reset": true, 00:23:09.002 "compare": false, 00:23:09.002 "compare_and_write": false, 00:23:09.002 "abort": true, 00:23:09.002 "nvme_admin": false, 00:23:09.002 "nvme_io": false 00:23:09.002 }, 00:23:09.002 "memory_domains": [ 00:23:09.002 { 00:23:09.002 "dma_device_id": "system", 00:23:09.002 "dma_device_type": 1 00:23:09.002 }, 00:23:09.002 { 00:23:09.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.002 "dma_device_type": 2 00:23:09.002 } 00:23:09.002 ], 00:23:09.003 "driver_specific": {} 00:23:09.003 }, 00:23:09.003 { 00:23:09.003 "name": "Passthru0", 00:23:09.003 "aliases": [ 00:23:09.003 "4dd56be5-c884-5db0-a596-9873e9d9d0f6" 00:23:09.003 ], 00:23:09.003 "product_name": "passthru", 00:23:09.003 "block_size": 512, 00:23:09.003 "num_blocks": 16384, 00:23:09.003 "uuid": "4dd56be5-c884-5db0-a596-9873e9d9d0f6", 00:23:09.003 "assigned_rate_limits": { 00:23:09.003 "rw_ios_per_sec": 0, 00:23:09.003 "rw_mbytes_per_sec": 0, 00:23:09.003 "r_mbytes_per_sec": 0, 00:23:09.003 "w_mbytes_per_sec": 0 00:23:09.003 }, 00:23:09.003 "claimed": false, 00:23:09.003 "zoned": false, 00:23:09.003 "supported_io_types": { 00:23:09.003 "read": true, 00:23:09.003 "write": true, 00:23:09.003 "unmap": true, 00:23:09.003 "write_zeroes": true, 00:23:09.003 "flush": true, 00:23:09.003 "reset": true, 00:23:09.003 "compare": false, 00:23:09.003 "compare_and_write": false, 00:23:09.003 "abort": true, 00:23:09.003 "nvme_admin": false, 00:23:09.003 "nvme_io": false 00:23:09.003 }, 00:23:09.003 "memory_domains": [ 00:23:09.003 { 00:23:09.003 "dma_device_id": "system", 00:23:09.003 "dma_device_type": 1 00:23:09.003 }, 00:23:09.003 { 00:23:09.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.003 "dma_device_type": 2 00:23:09.003 } 00:23:09.003 ], 00:23:09.003 "driver_specific": { 00:23:09.003 "passthru": { 00:23:09.003 "name": "Passthru0", 00:23:09.003 "base_bdev_name": "Malloc2" 00:23:09.003 } 00:23:09.003 } 00:23:09.003 } 00:23:09.003 ]' 00:23:09.003 11:33:52 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:23:09.003 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:23:09.260 11:33:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:23:09.260 00:23:09.260 real 0m0.286s 00:23:09.260 user 0m0.173s 00:23:09.260 sys 0m0.051s 00:23:09.260 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:09.260 11:33:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:09.260 ************************************ 00:23:09.260 END TEST rpc_daemon_integrity 00:23:09.260 ************************************ 00:23:09.260 11:33:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:09.260 11:33:53 rpc -- rpc/rpc.sh@84 -- # killprocess 3311779 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@949 -- # '[' -z 3311779 ']' 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@953 -- # kill -0 3311779 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@954 -- # uname 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3311779 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3311779' 00:23:09.260 killing process with pid 3311779 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@968 -- # kill 3311779 00:23:09.260 11:33:53 rpc -- common/autotest_common.sh@973 -- # wait 3311779 00:23:09.517 00:23:09.517 real 0m2.624s 00:23:09.517 user 0m3.267s 00:23:09.517 sys 0m0.864s 00:23:09.517 11:33:53 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:09.517 11:33:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:09.517 ************************************ 00:23:09.517 END TEST rpc 00:23:09.517 ************************************ 00:23:09.517 11:33:53 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:23:09.517 11:33:53 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:09.517 11:33:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:09.517 11:33:53 -- common/autotest_common.sh@10 -- # set +x 00:23:09.775 ************************************ 00:23:09.775 START TEST skip_rpc 00:23:09.775 ************************************ 00:23:09.775 11:33:53 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:23:09.775 * Looking for test storage... 00:23:09.775 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:23:09.775 11:33:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:23:09.775 11:33:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:23:09.775 11:33:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:23:09.775 11:33:53 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:09.775 11:33:53 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:09.775 11:33:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:09.775 ************************************ 00:23:09.775 START TEST skip_rpc 00:23:09.775 ************************************ 00:23:09.775 11:33:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:23:09.775 11:33:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3312326 00:23:09.775 11:33:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:09.775 11:33:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:23:09.775 11:33:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:23:09.775 [2024-06-10 11:33:53.637478] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
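The rpc_daemon_integrity pass above drives a malloc bdev through the full passthru lifecycle purely over JSON-RPC. A minimal sketch of the same sequence issued by hand with scripts/rpc.py, assuming a target is already listening on the default /var/tmp/spdk.sock and the commands are run from the SPDK repository root:

    # create an 8 MiB malloc bdev with 512-byte blocks (prints the new name, e.g. Malloc2)
    ./scripts/rpc.py bdev_malloc_create 8 512
    # stack a passthru vbdev on top; this claims the base bdev (claim_type exclusive_write above)
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    # both bdevs should now be listed
    ./scripts/rpc.py bdev_get_bdevs | jq length
    # tear down in reverse order
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2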
00:23:09.775 [2024-06-10 11:33:53.637543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312326 ] 00:23:09.775 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.775 [2024-06-10 11:33:53.716786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.033 [2024-06-10 11:33:53.805169] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3312326 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 3312326 ']' 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 3312326 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3312326 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3312326' 00:23:15.291 killing process with pid 3312326 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 3312326 00:23:15.291 11:33:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 3312326 00:23:15.291 00:23:15.291 real 0m5.413s 00:23:15.291 user 0m5.139s 00:23:15.291 sys 0m0.305s 00:23:15.291 11:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:15.291 11:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:15.291 ************************************ 00:23:15.291 END TEST skip_rpc 
00:23:15.291 ************************************ 00:23:15.291 11:33:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:23:15.291 11:33:59 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:15.291 11:33:59 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:15.291 11:33:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:15.291 ************************************ 00:23:15.291 START TEST skip_rpc_with_json 00:23:15.291 ************************************ 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3313064 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3313064 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 3313064 ']' 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:15.291 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:15.291 [2024-06-10 11:33:59.123508] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:23:15.291 [2024-06-10 11:33:59.123573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3313064 ] 00:23:15.291 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.291 [2024-06-10 11:33:59.203165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.549 [2024-06-10 11:33:59.300631] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:16.114 [2024-06-10 11:33:59.955180] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:23:16.114 request: 00:23:16.114 { 00:23:16.114 "trtype": "tcp", 00:23:16.114 "method": "nvmf_get_transports", 00:23:16.114 "req_id": 1 00:23:16.114 } 00:23:16.114 Got JSON-RPC error response 00:23:16.114 response: 00:23:16.114 { 00:23:16.114 "code": -19, 00:23:16.114 "message": "No such device" 00:23:16.114 } 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:16.114 [2024-06-10 11:33:59.967285] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.114 11:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:16.372 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.372 11:34:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:23:16.372 { 00:23:16.372 "subsystems": [ 00:23:16.372 { 00:23:16.372 "subsystem": "scheduler", 00:23:16.372 "config": [ 00:23:16.372 { 00:23:16.372 "method": "framework_set_scheduler", 00:23:16.372 "params": { 00:23:16.372 "name": "static" 00:23:16.372 } 00:23:16.372 } 00:23:16.372 ] 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "subsystem": "vmd", 00:23:16.372 "config": [] 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "subsystem": "sock", 00:23:16.372 "config": [ 00:23:16.372 { 00:23:16.372 "method": "sock_set_default_impl", 00:23:16.372 "params": { 00:23:16.372 "impl_name": "posix" 00:23:16.372 } 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "method": "sock_impl_set_options", 00:23:16.372 "params": { 00:23:16.372 "impl_name": "ssl", 00:23:16.372 "recv_buf_size": 4096, 00:23:16.372 "send_buf_size": 4096, 00:23:16.372 "enable_recv_pipe": true, 00:23:16.372 "enable_quickack": 
false, 00:23:16.372 "enable_placement_id": 0, 00:23:16.372 "enable_zerocopy_send_server": true, 00:23:16.372 "enable_zerocopy_send_client": false, 00:23:16.372 "zerocopy_threshold": 0, 00:23:16.372 "tls_version": 0, 00:23:16.372 "enable_ktls": false 00:23:16.372 } 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "method": "sock_impl_set_options", 00:23:16.372 "params": { 00:23:16.372 "impl_name": "posix", 00:23:16.372 "recv_buf_size": 2097152, 00:23:16.372 "send_buf_size": 2097152, 00:23:16.372 "enable_recv_pipe": true, 00:23:16.372 "enable_quickack": false, 00:23:16.372 "enable_placement_id": 0, 00:23:16.372 "enable_zerocopy_send_server": true, 00:23:16.372 "enable_zerocopy_send_client": false, 00:23:16.372 "zerocopy_threshold": 0, 00:23:16.372 "tls_version": 0, 00:23:16.372 "enable_ktls": false 00:23:16.372 } 00:23:16.372 } 00:23:16.372 ] 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "subsystem": "iobuf", 00:23:16.372 "config": [ 00:23:16.372 { 00:23:16.372 "method": "iobuf_set_options", 00:23:16.372 "params": { 00:23:16.372 "small_pool_count": 8192, 00:23:16.372 "large_pool_count": 1024, 00:23:16.372 "small_bufsize": 8192, 00:23:16.372 "large_bufsize": 135168 00:23:16.372 } 00:23:16.372 } 00:23:16.372 ] 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "subsystem": "keyring", 00:23:16.372 "config": [] 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "subsystem": "vfio_user_target", 00:23:16.372 "config": null 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "subsystem": "accel", 00:23:16.372 "config": [ 00:23:16.372 { 00:23:16.372 "method": "accel_set_options", 00:23:16.372 "params": { 00:23:16.372 "small_cache_size": 128, 00:23:16.372 "large_cache_size": 16, 00:23:16.372 "task_count": 2048, 00:23:16.372 "sequence_count": 2048, 00:23:16.372 "buf_count": 2048 00:23:16.372 } 00:23:16.372 } 00:23:16.372 ] 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "subsystem": "bdev", 00:23:16.372 "config": [ 00:23:16.372 { 00:23:16.372 "method": "bdev_set_options", 00:23:16.372 "params": { 00:23:16.372 "bdev_io_pool_size": 65535, 00:23:16.372 "bdev_io_cache_size": 256, 00:23:16.372 "bdev_auto_examine": true, 00:23:16.372 "iobuf_small_cache_size": 128, 00:23:16.372 "iobuf_large_cache_size": 16 00:23:16.372 } 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "method": "bdev_raid_set_options", 00:23:16.372 "params": { 00:23:16.372 "process_window_size_kb": 1024 00:23:16.372 } 00:23:16.372 }, 00:23:16.372 { 00:23:16.372 "method": "bdev_nvme_set_options", 00:23:16.372 "params": { 00:23:16.372 "action_on_timeout": "none", 00:23:16.372 "timeout_us": 0, 00:23:16.372 "timeout_admin_us": 0, 00:23:16.372 "keep_alive_timeout_ms": 10000, 00:23:16.372 "arbitration_burst": 0, 00:23:16.372 "low_priority_weight": 0, 00:23:16.372 "medium_priority_weight": 0, 00:23:16.372 "high_priority_weight": 0, 00:23:16.372 "nvme_adminq_poll_period_us": 10000, 00:23:16.372 "nvme_ioq_poll_period_us": 0, 00:23:16.372 "io_queue_requests": 0, 00:23:16.372 "delay_cmd_submit": true, 00:23:16.372 "transport_retry_count": 4, 00:23:16.372 "bdev_retry_count": 3, 00:23:16.372 "transport_ack_timeout": 0, 00:23:16.372 "ctrlr_loss_timeout_sec": 0, 00:23:16.372 "reconnect_delay_sec": 0, 00:23:16.372 "fast_io_fail_timeout_sec": 0, 00:23:16.372 "disable_auto_failback": false, 00:23:16.372 "generate_uuids": false, 00:23:16.372 "transport_tos": 0, 00:23:16.373 "nvme_error_stat": false, 00:23:16.373 "rdma_srq_size": 0, 00:23:16.373 "io_path_stat": false, 00:23:16.373 "allow_accel_sequence": false, 00:23:16.373 "rdma_max_cq_size": 0, 00:23:16.373 "rdma_cm_event_timeout_ms": 0, 
00:23:16.373 "dhchap_digests": [ 00:23:16.373 "sha256", 00:23:16.373 "sha384", 00:23:16.373 "sha512" 00:23:16.373 ], 00:23:16.373 "dhchap_dhgroups": [ 00:23:16.373 "null", 00:23:16.373 "ffdhe2048", 00:23:16.373 "ffdhe3072", 00:23:16.373 "ffdhe4096", 00:23:16.373 "ffdhe6144", 00:23:16.373 "ffdhe8192" 00:23:16.373 ] 00:23:16.373 } 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "method": "bdev_nvme_set_hotplug", 00:23:16.373 "params": { 00:23:16.373 "period_us": 100000, 00:23:16.373 "enable": false 00:23:16.373 } 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "method": "bdev_iscsi_set_options", 00:23:16.373 "params": { 00:23:16.373 "timeout_sec": 30 00:23:16.373 } 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "method": "bdev_wait_for_examine" 00:23:16.373 } 00:23:16.373 ] 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "subsystem": "nvmf", 00:23:16.373 "config": [ 00:23:16.373 { 00:23:16.373 "method": "nvmf_set_config", 00:23:16.373 "params": { 00:23:16.373 "discovery_filter": "match_any", 00:23:16.373 "admin_cmd_passthru": { 00:23:16.373 "identify_ctrlr": false 00:23:16.373 } 00:23:16.373 } 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "method": "nvmf_set_max_subsystems", 00:23:16.373 "params": { 00:23:16.373 "max_subsystems": 1024 00:23:16.373 } 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "method": "nvmf_set_crdt", 00:23:16.373 "params": { 00:23:16.373 "crdt1": 0, 00:23:16.373 "crdt2": 0, 00:23:16.373 "crdt3": 0 00:23:16.373 } 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "method": "nvmf_create_transport", 00:23:16.373 "params": { 00:23:16.373 "trtype": "TCP", 00:23:16.373 "max_queue_depth": 128, 00:23:16.373 "max_io_qpairs_per_ctrlr": 127, 00:23:16.373 "in_capsule_data_size": 4096, 00:23:16.373 "max_io_size": 131072, 00:23:16.373 "io_unit_size": 131072, 00:23:16.373 "max_aq_depth": 128, 00:23:16.373 "num_shared_buffers": 511, 00:23:16.373 "buf_cache_size": 4294967295, 00:23:16.373 "dif_insert_or_strip": false, 00:23:16.373 "zcopy": false, 00:23:16.373 "c2h_success": true, 00:23:16.373 "sock_priority": 0, 00:23:16.373 "abort_timeout_sec": 1, 00:23:16.373 "ack_timeout": 0, 00:23:16.373 "data_wr_pool_size": 0 00:23:16.373 } 00:23:16.373 } 00:23:16.373 ] 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "subsystem": "nbd", 00:23:16.373 "config": [] 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "subsystem": "ublk", 00:23:16.373 "config": [] 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "subsystem": "vhost_blk", 00:23:16.373 "config": [] 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "subsystem": "scsi", 00:23:16.373 "config": null 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "subsystem": "iscsi", 00:23:16.373 "config": [ 00:23:16.373 { 00:23:16.373 "method": "iscsi_set_options", 00:23:16.373 "params": { 00:23:16.373 "node_base": "iqn.2016-06.io.spdk", 00:23:16.373 "max_sessions": 128, 00:23:16.373 "max_connections_per_session": 2, 00:23:16.373 "max_queue_depth": 64, 00:23:16.373 "default_time2wait": 2, 00:23:16.373 "default_time2retain": 20, 00:23:16.373 "first_burst_length": 8192, 00:23:16.373 "immediate_data": true, 00:23:16.373 "allow_duplicated_isid": false, 00:23:16.373 "error_recovery_level": 0, 00:23:16.373 "nop_timeout": 60, 00:23:16.373 "nop_in_interval": 30, 00:23:16.373 "disable_chap": false, 00:23:16.373 "require_chap": false, 00:23:16.373 "mutual_chap": false, 00:23:16.373 "chap_group": 0, 00:23:16.373 "max_large_datain_per_connection": 64, 00:23:16.373 "max_r2t_per_connection": 4, 00:23:16.373 "pdu_pool_size": 36864, 00:23:16.373 "immediate_data_pool_size": 16384, 00:23:16.373 "data_out_pool_size": 2048 
00:23:16.373 } 00:23:16.373 } 00:23:16.373 ] 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "subsystem": "vhost_scsi", 00:23:16.373 "config": [] 00:23:16.373 } 00:23:16.373 ] 00:23:16.373 } 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3313064 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3313064 ']' 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3313064 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3313064 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3313064' 00:23:16.373 killing process with pid 3313064 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3313064 00:23:16.373 11:34:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3313064 00:23:16.630 11:34:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3313284 00:23:16.630 11:34:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:23:16.630 11:34:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3313284 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3313284 ']' 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3313284 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3313284 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3313284' 00:23:21.885 killing process with pid 3313284 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3313284 00:23:21.885 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3313284 00:23:22.142 11:34:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:23:22.142 11:34:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:23:22.142 00:23:22.142 real 
0m6.843s 00:23:22.142 user 0m6.582s 00:23:22.142 sys 0m0.715s 00:23:22.142 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:22.142 11:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:22.142 ************************************ 00:23:22.142 END TEST skip_rpc_with_json 00:23:22.142 ************************************ 00:23:22.142 11:34:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:23:22.142 11:34:05 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:22.142 11:34:05 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:22.142 11:34:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:22.142 ************************************ 00:23:22.142 START TEST skip_rpc_with_delay 00:23:22.142 ************************************ 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:22.142 [2024-06-10 11:34:06.059214] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
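The skip_rpc_with_json case above is built around save_config: a TCP transport is created on the live target, the whole configuration is dumped to config.json, and a second spdk_tgt is booted non-interactively from that file and checked for the transport. A rough sketch of that round trip, assuming the default RPC socket, paths relative to the repository root, and plain shell backgrounding in place of the harness helpers:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    # boot a fresh target from the saved JSON, then look for the transport init notice
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json &> log.txt &
    sleep 5
    grep -q 'TCP Transport Init' log.txt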
00:23:22.142 [2024-06-10 11:34:06.059375] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:22.142 00:23:22.142 real 0m0.049s 00:23:22.142 user 0m0.024s 00:23:22.142 sys 0m0.025s 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:22.142 11:34:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:23:22.142 ************************************ 00:23:22.142 END TEST skip_rpc_with_delay 00:23:22.142 ************************************ 00:23:22.400 11:34:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:23:22.400 11:34:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:23:22.400 11:34:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:23:22.400 11:34:06 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:22.400 11:34:06 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:22.400 11:34:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 ************************************ 00:23:22.400 START TEST exit_on_failed_rpc_init 00:23:22.400 ************************************ 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3314072 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3314072 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 3314072 ']' 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:22.400 11:34:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 [2024-06-10 11:34:06.166860] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:23:22.400 [2024-06-10 11:34:06.166906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314072 ] 00:23:22.400 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.400 [2024-06-10 11:34:06.244541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.400 [2024-06-10 11:34:06.341444] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:23:23.333 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:23:23.333 [2024-06-10 11:34:07.035564] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:23.333 [2024-06-10 11:34:07.035644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314092 ] 00:23:23.333 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.333 [2024-06-10 11:34:07.116186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.333 [2024-06-10 11:34:07.207034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.333 [2024-06-10 11:34:07.207136] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:23:23.333 [2024-06-10 11:34:07.207151] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:23.333 [2024-06-10 11:34:07.207160] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3314072 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 3314072 ']' 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 3314072 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3314072 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3314072' 00:23:23.591 killing process with pid 3314072 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 3314072 00:23:23.591 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 3314072 00:23:23.851 00:23:23.851 real 0m1.526s 00:23:23.851 user 0m1.693s 00:23:23.851 sys 0m0.485s 00:23:23.851 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:23.851 11:34:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.851 ************************************ 00:23:23.851 END TEST exit_on_failed_rpc_init 00:23:23.851 ************************************ 00:23:23.851 11:34:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:23:23.851 00:23:23.851 real 0m14.253s 00:23:23.851 user 0m13.594s 00:23:23.851 sys 0m1.833s 00:23:23.851 11:34:07 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:23.851 11:34:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:23.851 ************************************ 00:23:23.851 END TEST skip_rpc 00:23:23.851 ************************************ 00:23:23.851 11:34:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:23:23.851 11:34:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:23.851 11:34:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 
00:23:23.851 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:23:24.110 ************************************ 00:23:24.110 START TEST rpc_client 00:23:24.110 ************************************ 00:23:24.110 11:34:07 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:23:24.110 * Looking for test storage... 00:23:24.110 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:23:24.110 11:34:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:23:24.110 OK 00:23:24.110 11:34:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:23:24.110 00:23:24.110 real 0m0.116s 00:23:24.110 user 0m0.043s 00:23:24.110 sys 0m0.082s 00:23:24.110 11:34:07 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:24.110 11:34:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:23:24.110 ************************************ 00:23:24.110 END TEST rpc_client 00:23:24.110 ************************************ 00:23:24.110 11:34:07 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:23:24.110 11:34:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:24.110 11:34:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:24.110 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:23:24.110 ************************************ 00:23:24.110 START TEST json_config 00:23:24.110 ************************************ 00:23:24.110 11:34:07 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:23:24.369 11:34:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e69187-586b-e711-906e-0017a4403562 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80e69187-586b-e711-906e-0017a4403562 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.369 11:34:08 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:23:24.369 11:34:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.369 11:34:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.369 11:34:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.369 11:34:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.369 11:34:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.369 11:34:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.369 11:34:08 json_config -- paths/export.sh@5 -- # export PATH 00:23:24.369 11:34:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@47 -- # : 0 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.369 11:34:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.370 11:34:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.370 11:34:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:23:24.370 11:34:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:23:24.370 11:34:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:23:24.370 11:34:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:23:24.370 11:34:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:23:24.370 11:34:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:23:24.370 WARNING: No tests are enabled so not running JSON configuration tests 00:23:24.370 11:34:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:23:24.370 00:23:24.370 real 0m0.102s 00:23:24.370 user 0m0.043s 00:23:24.370 sys 0m0.060s 00:23:24.370 11:34:08 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:24.370 11:34:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:23:24.370 ************************************ 00:23:24.370 END TEST json_config 00:23:24.370 ************************************ 00:23:24.370 11:34:08 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:23:24.370 11:34:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:24.370 11:34:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:24.370 11:34:08 -- common/autotest_common.sh@10 -- # set +x 00:23:24.370 ************************************ 00:23:24.370 START TEST json_config_extra_key 00:23:24.370 ************************************ 00:23:24.370 11:34:08 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80e69187-586b-e711-906e-0017a4403562 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80e69187-586b-e711-906e-0017a4403562 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
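json_config.sh above exits immediately because none of the feature flags it checks are set in this run; the gate it evaluates (json_config.sh@26-28 in the trace) amounts to roughly the following:

    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + \
          SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi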
00:23:24.370 11:34:08 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.370 11:34:08 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.370 11:34:08 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.370 11:34:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.370 11:34:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.370 11:34:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.370 11:34:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:23:24.370 11:34:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.370 11:34:08 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:23:24.370 11:34:08 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:23:24.370 INFO: launching applications... 00:23:24.370 11:34:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3314410 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:23:24.370 Waiting for target to run... 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3314410 /var/tmp/spdk_tgt.sock 00:23:24.370 11:34:08 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 3314410 ']' 00:23:24.370 11:34:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:23:24.370 11:34:08 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:23:24.370 11:34:08 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:24.370 11:34:08 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:23:24.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:23:24.370 11:34:08 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:24.370 11:34:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:23:24.370 [2024-06-10 11:34:08.293631] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
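json_config_extra_key launches its target on a dedicated RPC socket (/var/tmp/spdk_tgt.sock) so the extra-key configuration can be driven without touching the default socket. A minimal sketch of that launch plus a follow-up call over the same socket; the backgrounding, the $! capture and the spdk_get_version probe are illustrative choices rather than a copy of the harness:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid=$!
    # address RPCs to the private socket with -s
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version
    kill -SIGINT "$app_pid"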
00:23:24.370 [2024-06-10 11:34:08.293708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314410 ] 00:23:24.628 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.887 [2024-06-10 11:34:08.595613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.887 [2024-06-10 11:34:08.674482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.454 11:34:09 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:25.454 11:34:09 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:23:25.454 00:23:25.454 11:34:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:23:25.454 INFO: shutting down applications... 00:23:25.454 11:34:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3314410 ]] 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3314410 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3314410 00:23:25.454 11:34:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:25.712 11:34:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:25.712 11:34:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:25.712 11:34:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3314410 00:23:25.712 11:34:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:23:25.712 11:34:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:23:25.712 11:34:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:23:25.712 11:34:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:23:25.712 SPDK target shutdown done 00:23:25.712 11:34:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:23:25.712 Success 00:23:25.712 00:23:25.712 real 0m1.452s 00:23:25.712 user 0m1.230s 00:23:25.712 sys 0m0.419s 00:23:25.712 11:34:09 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:25.712 11:34:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:23:25.712 ************************************ 00:23:25.712 END TEST json_config_extra_key 00:23:25.712 ************************************ 00:23:25.970 11:34:09 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:25.970 11:34:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:25.970 11:34:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:25.970 11:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:25.970 ************************************ 
00:23:25.970 START TEST alias_rpc 00:23:25.970 ************************************ 00:23:25.970 11:34:09 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:25.970 * Looking for test storage... 00:23:25.970 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:23:25.970 11:34:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:23:25.970 11:34:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:25.970 11:34:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3314662 00:23:25.970 11:34:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3314662 00:23:25.970 11:34:09 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 3314662 ']' 00:23:25.970 11:34:09 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.970 11:34:09 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:25.970 11:34:09 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.970 11:34:09 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:25.970 11:34:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:25.970 [2024-06-10 11:34:09.808068] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:25.970 [2024-06-10 11:34:09.808125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314662 ] 00:23:25.970 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.970 [2024-06-10 11:34:09.887326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.228 [2024-06-10 11:34:09.987686] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.794 11:34:10 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:26.794 11:34:10 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:23:26.794 11:34:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:23:27.052 11:34:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3314662 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 3314662 ']' 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 3314662 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3314662 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3314662' 00:23:27.052 killing process with pid 3314662 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@968 -- # kill 3314662 00:23:27.052 11:34:10 alias_rpc -- common/autotest_common.sh@973 -- # wait 
3314662 00:23:27.311 00:23:27.311 real 0m1.507s 00:23:27.311 user 0m1.569s 00:23:27.311 sys 0m0.469s 00:23:27.311 11:34:11 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:27.311 11:34:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:27.311 ************************************ 00:23:27.311 END TEST alias_rpc 00:23:27.311 ************************************ 00:23:27.311 11:34:11 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:23:27.311 11:34:11 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:23:27.311 11:34:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:27.311 11:34:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:27.311 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:27.568 ************************************ 00:23:27.568 START TEST spdkcli_tcp 00:23:27.568 ************************************ 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:23:27.568 * Looking for test storage... 00:23:27.568 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3315029 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3315029 00:23:27.568 11:34:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 3315029 ']' 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:27.568 11:34:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:27.568 [2024-06-10 11:34:11.440391] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
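The alias_rpc section above starts a plain spdk_tgt and then replays a JSON configuration into it with scripts/rpc.py load_config -i, the invocation visible in the log. A hedged sketch of that call follows; the here-doc payload is a hypothetical minimal config (not the one the test loads), and it uses construct_malloc_bdev, a legacy alias of bdev_malloc_create, which is the kind of deprecated name the -i (include-aliases) flag is meant to keep working.

```bash
# Sketch only: feed a config that uses a deprecated RPC name into a running
# target. Socket path and the load_config -i invocation match the log; the
# JSON body below is a made-up minimal example.
./scripts/rpc.py -s /var/tmp/spdk.sock load_config -i <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "construct_malloc_bdev",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
```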
00:23:27.568 [2024-06-10 11:34:11.440473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315029 ] 00:23:27.568 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.826 [2024-06-10 11:34:11.522491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:27.826 [2024-06-10 11:34:11.611117] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.826 [2024-06-10 11:34:11.611119] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.391 11:34:12 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:28.391 11:34:12 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:23:28.391 11:34:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3315051 00:23:28.391 11:34:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:23:28.391 11:34:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:23:28.649 [ 00:23:28.649 "spdk_get_version", 00:23:28.649 "rpc_get_methods", 00:23:28.649 "trace_get_info", 00:23:28.649 "trace_get_tpoint_group_mask", 00:23:28.649 "trace_disable_tpoint_group", 00:23:28.649 "trace_enable_tpoint_group", 00:23:28.649 "trace_clear_tpoint_mask", 00:23:28.649 "trace_set_tpoint_mask", 00:23:28.649 "vfu_tgt_set_base_path", 00:23:28.649 "framework_get_pci_devices", 00:23:28.649 "framework_get_config", 00:23:28.649 "framework_get_subsystems", 00:23:28.649 "keyring_get_keys", 00:23:28.649 "iobuf_get_stats", 00:23:28.649 "iobuf_set_options", 00:23:28.649 "sock_get_default_impl", 00:23:28.649 "sock_set_default_impl", 00:23:28.649 "sock_impl_set_options", 00:23:28.649 "sock_impl_get_options", 00:23:28.649 "vmd_rescan", 00:23:28.649 "vmd_remove_device", 00:23:28.649 "vmd_enable", 00:23:28.649 "accel_get_stats", 00:23:28.649 "accel_set_options", 00:23:28.649 "accel_set_driver", 00:23:28.649 "accel_crypto_key_destroy", 00:23:28.649 "accel_crypto_keys_get", 00:23:28.649 "accel_crypto_key_create", 00:23:28.649 "accel_assign_opc", 00:23:28.649 "accel_get_module_info", 00:23:28.649 "accel_get_opc_assignments", 00:23:28.649 "notify_get_notifications", 00:23:28.649 "notify_get_types", 00:23:28.649 "bdev_get_histogram", 00:23:28.649 "bdev_enable_histogram", 00:23:28.649 "bdev_set_qos_limit", 00:23:28.649 "bdev_set_qd_sampling_period", 00:23:28.649 "bdev_get_bdevs", 00:23:28.649 "bdev_reset_iostat", 00:23:28.649 "bdev_get_iostat", 00:23:28.649 "bdev_examine", 00:23:28.649 "bdev_wait_for_examine", 00:23:28.649 "bdev_set_options", 00:23:28.649 "scsi_get_devices", 00:23:28.649 "thread_set_cpumask", 00:23:28.649 "framework_get_scheduler", 00:23:28.649 "framework_set_scheduler", 00:23:28.649 "framework_get_reactors", 00:23:28.649 "thread_get_io_channels", 00:23:28.649 "thread_get_pollers", 00:23:28.649 "thread_get_stats", 00:23:28.649 "framework_monitor_context_switch", 00:23:28.649 "spdk_kill_instance", 00:23:28.649 "log_enable_timestamps", 00:23:28.649 "log_get_flags", 00:23:28.649 "log_clear_flag", 00:23:28.649 "log_set_flag", 00:23:28.649 "log_get_level", 00:23:28.649 "log_set_level", 00:23:28.649 "log_get_print_level", 00:23:28.649 "log_set_print_level", 00:23:28.649 "framework_enable_cpumask_locks", 00:23:28.649 "framework_disable_cpumask_locks", 00:23:28.649 "framework_wait_init", 00:23:28.649 
"framework_start_init", 00:23:28.649 "virtio_blk_create_transport", 00:23:28.649 "virtio_blk_get_transports", 00:23:28.649 "vhost_controller_set_coalescing", 00:23:28.649 "vhost_get_controllers", 00:23:28.649 "vhost_delete_controller", 00:23:28.649 "vhost_create_blk_controller", 00:23:28.649 "vhost_scsi_controller_remove_target", 00:23:28.649 "vhost_scsi_controller_add_target", 00:23:28.649 "vhost_start_scsi_controller", 00:23:28.649 "vhost_create_scsi_controller", 00:23:28.649 "ublk_recover_disk", 00:23:28.649 "ublk_get_disks", 00:23:28.649 "ublk_stop_disk", 00:23:28.649 "ublk_start_disk", 00:23:28.649 "ublk_destroy_target", 00:23:28.649 "ublk_create_target", 00:23:28.649 "nbd_get_disks", 00:23:28.649 "nbd_stop_disk", 00:23:28.649 "nbd_start_disk", 00:23:28.649 "env_dpdk_get_mem_stats", 00:23:28.649 "nvmf_stop_mdns_prr", 00:23:28.649 "nvmf_publish_mdns_prr", 00:23:28.649 "nvmf_subsystem_get_listeners", 00:23:28.649 "nvmf_subsystem_get_qpairs", 00:23:28.649 "nvmf_subsystem_get_controllers", 00:23:28.649 "nvmf_get_stats", 00:23:28.649 "nvmf_get_transports", 00:23:28.649 "nvmf_create_transport", 00:23:28.649 "nvmf_get_targets", 00:23:28.649 "nvmf_delete_target", 00:23:28.649 "nvmf_create_target", 00:23:28.649 "nvmf_subsystem_allow_any_host", 00:23:28.649 "nvmf_subsystem_remove_host", 00:23:28.650 "nvmf_subsystem_add_host", 00:23:28.650 "nvmf_ns_remove_host", 00:23:28.650 "nvmf_ns_add_host", 00:23:28.650 "nvmf_subsystem_remove_ns", 00:23:28.650 "nvmf_subsystem_add_ns", 00:23:28.650 "nvmf_subsystem_listener_set_ana_state", 00:23:28.650 "nvmf_discovery_get_referrals", 00:23:28.650 "nvmf_discovery_remove_referral", 00:23:28.650 "nvmf_discovery_add_referral", 00:23:28.650 "nvmf_subsystem_remove_listener", 00:23:28.650 "nvmf_subsystem_add_listener", 00:23:28.650 "nvmf_delete_subsystem", 00:23:28.650 "nvmf_create_subsystem", 00:23:28.650 "nvmf_get_subsystems", 00:23:28.650 "nvmf_set_crdt", 00:23:28.650 "nvmf_set_config", 00:23:28.650 "nvmf_set_max_subsystems", 00:23:28.650 "iscsi_get_histogram", 00:23:28.650 "iscsi_enable_histogram", 00:23:28.650 "iscsi_set_options", 00:23:28.650 "iscsi_get_auth_groups", 00:23:28.650 "iscsi_auth_group_remove_secret", 00:23:28.650 "iscsi_auth_group_add_secret", 00:23:28.650 "iscsi_delete_auth_group", 00:23:28.650 "iscsi_create_auth_group", 00:23:28.650 "iscsi_set_discovery_auth", 00:23:28.650 "iscsi_get_options", 00:23:28.650 "iscsi_target_node_request_logout", 00:23:28.650 "iscsi_target_node_set_redirect", 00:23:28.650 "iscsi_target_node_set_auth", 00:23:28.650 "iscsi_target_node_add_lun", 00:23:28.650 "iscsi_get_stats", 00:23:28.650 "iscsi_get_connections", 00:23:28.650 "iscsi_portal_group_set_auth", 00:23:28.650 "iscsi_start_portal_group", 00:23:28.650 "iscsi_delete_portal_group", 00:23:28.650 "iscsi_create_portal_group", 00:23:28.650 "iscsi_get_portal_groups", 00:23:28.650 "iscsi_delete_target_node", 00:23:28.650 "iscsi_target_node_remove_pg_ig_maps", 00:23:28.650 "iscsi_target_node_add_pg_ig_maps", 00:23:28.650 "iscsi_create_target_node", 00:23:28.650 "iscsi_get_target_nodes", 00:23:28.650 "iscsi_delete_initiator_group", 00:23:28.650 "iscsi_initiator_group_remove_initiators", 00:23:28.650 "iscsi_initiator_group_add_initiators", 00:23:28.650 "iscsi_create_initiator_group", 00:23:28.650 "iscsi_get_initiator_groups", 00:23:28.650 "keyring_linux_set_options", 00:23:28.650 "keyring_file_remove_key", 00:23:28.650 "keyring_file_add_key", 00:23:28.650 "vfu_virtio_create_scsi_endpoint", 00:23:28.650 "vfu_virtio_scsi_remove_target", 00:23:28.650 
"vfu_virtio_scsi_add_target", 00:23:28.650 "vfu_virtio_create_blk_endpoint", 00:23:28.650 "vfu_virtio_delete_endpoint", 00:23:28.650 "iaa_scan_accel_module", 00:23:28.650 "dsa_scan_accel_module", 00:23:28.650 "ioat_scan_accel_module", 00:23:28.650 "accel_error_inject_error", 00:23:28.650 "bdev_iscsi_delete", 00:23:28.650 "bdev_iscsi_create", 00:23:28.650 "bdev_iscsi_set_options", 00:23:28.650 "bdev_virtio_attach_controller", 00:23:28.650 "bdev_virtio_scsi_get_devices", 00:23:28.650 "bdev_virtio_detach_controller", 00:23:28.650 "bdev_virtio_blk_set_hotplug", 00:23:28.650 "bdev_ftl_set_property", 00:23:28.650 "bdev_ftl_get_properties", 00:23:28.650 "bdev_ftl_get_stats", 00:23:28.650 "bdev_ftl_unmap", 00:23:28.650 "bdev_ftl_unload", 00:23:28.650 "bdev_ftl_delete", 00:23:28.650 "bdev_ftl_load", 00:23:28.650 "bdev_ftl_create", 00:23:28.650 "bdev_aio_delete", 00:23:28.650 "bdev_aio_rescan", 00:23:28.650 "bdev_aio_create", 00:23:28.650 "blobfs_create", 00:23:28.650 "blobfs_detect", 00:23:28.650 "blobfs_set_cache_size", 00:23:28.650 "bdev_zone_block_delete", 00:23:28.650 "bdev_zone_block_create", 00:23:28.650 "bdev_delay_delete", 00:23:28.650 "bdev_delay_create", 00:23:28.650 "bdev_delay_update_latency", 00:23:28.650 "bdev_split_delete", 00:23:28.650 "bdev_split_create", 00:23:28.650 "bdev_error_inject_error", 00:23:28.650 "bdev_error_delete", 00:23:28.650 "bdev_error_create", 00:23:28.650 "bdev_raid_set_options", 00:23:28.650 "bdev_raid_remove_base_bdev", 00:23:28.650 "bdev_raid_add_base_bdev", 00:23:28.650 "bdev_raid_delete", 00:23:28.650 "bdev_raid_create", 00:23:28.650 "bdev_raid_get_bdevs", 00:23:28.650 "bdev_lvol_set_parent_bdev", 00:23:28.650 "bdev_lvol_set_parent", 00:23:28.650 "bdev_lvol_check_shallow_copy", 00:23:28.650 "bdev_lvol_start_shallow_copy", 00:23:28.650 "bdev_lvol_grow_lvstore", 00:23:28.650 "bdev_lvol_get_lvols", 00:23:28.650 "bdev_lvol_get_lvstores", 00:23:28.650 "bdev_lvol_delete", 00:23:28.650 "bdev_lvol_set_read_only", 00:23:28.650 "bdev_lvol_resize", 00:23:28.650 "bdev_lvol_decouple_parent", 00:23:28.650 "bdev_lvol_inflate", 00:23:28.650 "bdev_lvol_rename", 00:23:28.650 "bdev_lvol_clone_bdev", 00:23:28.650 "bdev_lvol_clone", 00:23:28.650 "bdev_lvol_snapshot", 00:23:28.650 "bdev_lvol_create", 00:23:28.650 "bdev_lvol_delete_lvstore", 00:23:28.650 "bdev_lvol_rename_lvstore", 00:23:28.650 "bdev_lvol_create_lvstore", 00:23:28.650 "bdev_passthru_delete", 00:23:28.650 "bdev_passthru_create", 00:23:28.650 "bdev_nvme_cuse_unregister", 00:23:28.650 "bdev_nvme_cuse_register", 00:23:28.650 "bdev_opal_new_user", 00:23:28.650 "bdev_opal_set_lock_state", 00:23:28.650 "bdev_opal_delete", 00:23:28.650 "bdev_opal_get_info", 00:23:28.650 "bdev_opal_create", 00:23:28.650 "bdev_nvme_opal_revert", 00:23:28.650 "bdev_nvme_opal_init", 00:23:28.650 "bdev_nvme_send_cmd", 00:23:28.650 "bdev_nvme_get_path_iostat", 00:23:28.650 "bdev_nvme_get_mdns_discovery_info", 00:23:28.650 "bdev_nvme_stop_mdns_discovery", 00:23:28.650 "bdev_nvme_start_mdns_discovery", 00:23:28.650 "bdev_nvme_set_multipath_policy", 00:23:28.650 "bdev_nvme_set_preferred_path", 00:23:28.650 "bdev_nvme_get_io_paths", 00:23:28.650 "bdev_nvme_remove_error_injection", 00:23:28.650 "bdev_nvme_add_error_injection", 00:23:28.650 "bdev_nvme_get_discovery_info", 00:23:28.650 "bdev_nvme_stop_discovery", 00:23:28.650 "bdev_nvme_start_discovery", 00:23:28.650 "bdev_nvme_get_controller_health_info", 00:23:28.650 "bdev_nvme_disable_controller", 00:23:28.650 "bdev_nvme_enable_controller", 00:23:28.650 "bdev_nvme_reset_controller", 00:23:28.650 
"bdev_nvme_get_transport_statistics", 00:23:28.650 "bdev_nvme_apply_firmware", 00:23:28.650 "bdev_nvme_detach_controller", 00:23:28.650 "bdev_nvme_get_controllers", 00:23:28.650 "bdev_nvme_attach_controller", 00:23:28.650 "bdev_nvme_set_hotplug", 00:23:28.650 "bdev_nvme_set_options", 00:23:28.650 "bdev_null_resize", 00:23:28.650 "bdev_null_delete", 00:23:28.650 "bdev_null_create", 00:23:28.650 "bdev_malloc_delete", 00:23:28.650 "bdev_malloc_create" 00:23:28.650 ] 00:23:28.650 11:34:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.650 11:34:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:28.650 11:34:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3315029 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 3315029 ']' 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 3315029 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3315029 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3315029' 00:23:28.650 killing process with pid 3315029 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 3315029 00:23:28.650 11:34:12 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 3315029 00:23:28.908 00:23:28.909 real 0m1.545s 00:23:28.909 user 0m2.746s 00:23:28.909 sys 0m0.524s 00:23:28.909 11:34:12 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:28.909 11:34:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.909 ************************************ 00:23:28.909 END TEST spdkcli_tcp 00:23:28.909 ************************************ 00:23:29.167 11:34:12 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:23:29.167 11:34:12 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:29.167 11:34:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:29.167 11:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:29.167 ************************************ 00:23:29.167 START TEST dpdk_mem_utility 00:23:29.167 ************************************ 00:23:29.167 11:34:12 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:23:29.167 * Looking for test storage... 
00:23:29.167 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:23:29.167 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:23:29.167 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3315284 00:23:29.167 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3315284 00:23:29.167 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:23:29.167 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 3315284 ']' 00:23:29.167 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.167 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:29.167 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.167 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:29.167 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:23:29.167 [2024-06-10 11:34:13.054113] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:29.167 [2024-06-10 11:34:13.054182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315284 ] 00:23:29.167 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.426 [2024-06-10 11:34:13.131549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.426 [2024-06-10 11:34:13.224021] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.104 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:30.104 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:23:30.104 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:23:30.104 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:23:30.104 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.104 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:23:30.104 { 00:23:30.104 "filename": "/tmp/spdk_mem_dump.txt" 00:23:30.104 } 00:23:30.104 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.104 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:23:30.104 DPDK memory size 814.000000 MiB in 1 heap(s) 00:23:30.104 1 heaps totaling size 814.000000 MiB 00:23:30.104 size: 814.000000 MiB heap id: 0 00:23:30.104 end heaps---------- 00:23:30.104 8 mempools totaling size 598.116089 MiB 00:23:30.104 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:23:30.104 size: 158.602051 MiB name: PDU_data_out_Pool 00:23:30.104 size: 84.521057 MiB name: bdev_io_3315284 00:23:30.104 size: 51.011292 MiB name: evtpool_3315284 00:23:30.104 size: 50.003479 MiB 
name: msgpool_3315284 00:23:30.104 size: 21.763794 MiB name: PDU_Pool 00:23:30.104 size: 19.513306 MiB name: SCSI_TASK_Pool 00:23:30.104 size: 0.026123 MiB name: Session_Pool 00:23:30.104 end mempools------- 00:23:30.104 6 memzones totaling size 4.142822 MiB 00:23:30.104 size: 1.000366 MiB name: RG_ring_0_3315284 00:23:30.104 size: 1.000366 MiB name: RG_ring_1_3315284 00:23:30.104 size: 1.000366 MiB name: RG_ring_4_3315284 00:23:30.104 size: 1.000366 MiB name: RG_ring_5_3315284 00:23:30.104 size: 0.125366 MiB name: RG_ring_2_3315284 00:23:30.104 size: 0.015991 MiB name: RG_ring_3_3315284 00:23:30.104 end memzones------- 00:23:30.104 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:23:30.104 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:23:30.104 list of free elements. size: 12.519348 MiB 00:23:30.104 element at address: 0x200000400000 with size: 1.999512 MiB 00:23:30.104 element at address: 0x200018e00000 with size: 0.999878 MiB 00:23:30.104 element at address: 0x200019000000 with size: 0.999878 MiB 00:23:30.104 element at address: 0x200003e00000 with size: 0.996277 MiB 00:23:30.104 element at address: 0x200031c00000 with size: 0.994446 MiB 00:23:30.104 element at address: 0x200013800000 with size: 0.978699 MiB 00:23:30.104 element at address: 0x200007000000 with size: 0.959839 MiB 00:23:30.104 element at address: 0x200019200000 with size: 0.936584 MiB 00:23:30.104 element at address: 0x200000200000 with size: 0.841614 MiB 00:23:30.104 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:23:30.104 element at address: 0x20000b200000 with size: 0.490723 MiB 00:23:30.104 element at address: 0x200000800000 with size: 0.487793 MiB 00:23:30.104 element at address: 0x200019400000 with size: 0.485657 MiB 00:23:30.104 element at address: 0x200027e00000 with size: 0.410034 MiB 00:23:30.104 element at address: 0x200003a00000 with size: 0.355530 MiB 00:23:30.104 list of standard malloc elements. 
size: 199.218079 MiB 00:23:30.104 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:23:30.104 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:23:30.104 element at address: 0x200018efff80 with size: 1.000122 MiB 00:23:30.104 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:23:30.104 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:23:30.104 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:23:30.104 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:23:30.104 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:23:30.104 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:23:30.104 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:23:30.104 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:23:30.104 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200003adb300 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200003adb500 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200003affa80 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200003affb40 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:23:30.104 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:23:30.104 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:23:30.104 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:23:30.104 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:23:30.104 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:23:30.104 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200027e69040 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:23:30.104 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:23:30.104 list of memzone associated elements. 
size: 602.262573 MiB 00:23:30.104 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:23:30.104 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:23:30.104 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:23:30.104 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:23:30.104 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:23:30.104 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3315284_0 00:23:30.104 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:23:30.104 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3315284_0 00:23:30.104 element at address: 0x200003fff380 with size: 48.003052 MiB 00:23:30.104 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3315284_0 00:23:30.104 element at address: 0x2000195be940 with size: 20.255554 MiB 00:23:30.104 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:23:30.104 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:23:30.104 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:23:30.104 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:23:30.104 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3315284 00:23:30.104 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:23:30.104 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3315284 00:23:30.104 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:23:30.104 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3315284 00:23:30.104 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:23:30.104 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:23:30.104 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:23:30.104 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:23:30.105 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:23:30.105 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:23:30.105 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:23:30.105 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:23:30.105 element at address: 0x200003eff180 with size: 1.000488 MiB 00:23:30.105 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3315284 00:23:30.105 element at address: 0x200003affc00 with size: 1.000488 MiB 00:23:30.105 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3315284 00:23:30.105 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:23:30.105 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3315284 00:23:30.105 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:23:30.105 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3315284 00:23:30.105 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:23:30.105 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3315284 00:23:30.105 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:23:30.105 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:23:30.105 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:23:30.105 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:23:30.105 element at address: 0x20001947c540 with size: 0.250488 MiB 00:23:30.105 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:23:30.105 element at address: 0x200003adf880 with size: 0.125488 MiB 00:23:30.105 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3315284 00:23:30.105 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:23:30.105 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:23:30.105 element at address: 0x200027e69100 with size: 0.023743 MiB 00:23:30.105 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:23:30.105 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:23:30.105 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3315284 00:23:30.105 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:23:30.105 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:23:30.105 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:23:30.105 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3315284 00:23:30.105 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:23:30.105 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3315284 00:23:30.105 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:23:30.105 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:23:30.105 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:23:30.105 11:34:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3315284 00:23:30.105 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 3315284 ']' 00:23:30.105 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 3315284 00:23:30.105 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:23:30.105 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:30.105 11:34:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3315284 00:23:30.105 11:34:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:30.105 11:34:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:30.105 11:34:14 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3315284' 00:23:30.105 killing process with pid 3315284 00:23:30.105 11:34:14 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 3315284 00:23:30.105 11:34:14 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 3315284 00:23:30.672 00:23:30.672 real 0m1.451s 00:23:30.672 user 0m1.481s 00:23:30.672 sys 0m0.442s 00:23:30.672 11:34:14 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:30.672 11:34:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:23:30.672 ************************************ 00:23:30.672 END TEST dpdk_mem_utility 00:23:30.672 ************************************ 00:23:30.672 11:34:14 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:23:30.672 11:34:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:30.672 11:34:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:30.672 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:30.672 ************************************ 00:23:30.672 START TEST event 00:23:30.672 ************************************ 00:23:30.672 11:34:14 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:23:30.672 * Looking for test storage... 
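The dpdk_mem_utility output above comes from a two-step flow: the running target is asked to dump DPDK memory statistics to a file via the env_dpdk_get_mem_stats RPC, and the dump is then summarized by scripts/dpdk_mem_info.py (once plainly, once with -m 0 for the detailed element listing). A small sketch of that flow is below; jq is used here only to pull the filename out of the RPC reply and is not part of the test itself.

```bash
# Ask the target to write its DPDK memory stats, then post-process the dump.
# The RPC reply is JSON of the form {"filename": "/tmp/spdk_mem_dump.txt"}.
dump_file=$(./scripts/rpc.py env_dpdk_get_mem_stats | jq -r '.filename')
echo "stats written to: $dump_file"

./scripts/dpdk_mem_info.py        # heap/mempool/memzone summary, as shown above
./scripts/dpdk_mem_info.py -m 0   # detailed element listing (the -m 0 run above)
```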
00:23:30.672 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:23:30.672 11:34:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:23:30.672 11:34:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:23:30.672 11:34:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:23:30.672 11:34:14 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:23:30.672 11:34:14 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:30.672 11:34:14 event -- common/autotest_common.sh@10 -- # set +x 00:23:30.672 ************************************ 00:23:30.672 START TEST event_perf 00:23:30.672 ************************************ 00:23:30.672 11:34:14 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:23:30.672 Running I/O for 1 seconds...[2024-06-10 11:34:14.606344] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:30.672 [2024-06-10 11:34:14.606441] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315528 ] 00:23:30.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.930 [2024-06-10 11:34:14.689819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.930 [2024-06-10 11:34:14.779574] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.930 [2024-06-10 11:34:14.779661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.930 [2024-06-10 11:34:14.779738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.930 [2024-06-10 11:34:14.779740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.306 Running I/O for 1 seconds... 00:23:32.306 lcore 0: 194734 00:23:32.306 lcore 1: 194732 00:23:32.306 lcore 2: 194731 00:23:32.306 lcore 3: 194734 00:23:32.306 done. 00:23:32.306 00:23:32.306 real 0m1.273s 00:23:32.306 user 0m4.161s 00:23:32.306 sys 0m0.107s 00:23:32.306 11:34:15 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:32.306 11:34:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:23:32.306 ************************************ 00:23:32.306 END TEST event_perf 00:23:32.306 ************************************ 00:23:32.306 11:34:15 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:23:32.306 11:34:15 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:23:32.306 11:34:15 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:32.306 11:34:15 event -- common/autotest_common.sh@10 -- # set +x 00:23:32.306 ************************************ 00:23:32.306 START TEST event_reactor 00:23:32.306 ************************************ 00:23:32.306 11:34:15 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:23:32.306 [2024-06-10 11:34:15.963613] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:23:32.306 [2024-06-10 11:34:15.963726] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315725 ] 00:23:32.306 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.306 [2024-06-10 11:34:16.048127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.306 [2024-06-10 11:34:16.137926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.685 test_start 00:23:33.685 oneshot 00:23:33.685 tick 100 00:23:33.685 tick 100 00:23:33.685 tick 250 00:23:33.685 tick 100 00:23:33.685 tick 100 00:23:33.685 tick 100 00:23:33.685 tick 250 00:23:33.685 tick 500 00:23:33.685 tick 100 00:23:33.685 tick 100 00:23:33.685 tick 250 00:23:33.685 tick 100 00:23:33.685 tick 100 00:23:33.685 test_end 00:23:33.685 00:23:33.685 real 0m1.275s 00:23:33.685 user 0m1.173s 00:23:33.685 sys 0m0.097s 00:23:33.685 11:34:17 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:33.685 11:34:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:23:33.685 ************************************ 00:23:33.685 END TEST event_reactor 00:23:33.685 ************************************ 00:23:33.685 11:34:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:23:33.685 11:34:17 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:23:33.685 11:34:17 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:33.685 11:34:17 event -- common/autotest_common.sh@10 -- # set +x 00:23:33.685 ************************************ 00:23:33.685 START TEST event_reactor_perf 00:23:33.685 ************************************ 00:23:33.685 11:34:17 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:23:33.685 [2024-06-10 11:34:17.311156] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:23:33.686 [2024-06-10 11:34:17.311237] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315919 ] 00:23:33.686 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.686 [2024-06-10 11:34:17.391571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.686 [2024-06-10 11:34:17.478204] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.623 test_start 00:23:34.623 test_end 00:23:34.623 Performance: 975675 events per second 00:23:34.623 00:23:34.623 real 0m1.264s 00:23:34.623 user 0m1.155s 00:23:34.623 sys 0m0.105s 00:23:34.623 11:34:18 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:34.623 11:34:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:23:34.623 ************************************ 00:23:34.623 END TEST event_reactor_perf 00:23:34.623 ************************************ 00:23:34.882 11:34:18 event -- event/event.sh@49 -- # uname -s 00:23:34.882 11:34:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:23:34.882 11:34:18 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:23:34.882 11:34:18 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:34.882 11:34:18 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:34.882 11:34:18 event -- common/autotest_common.sh@10 -- # set +x 00:23:34.882 ************************************ 00:23:34.882 START TEST event_scheduler 00:23:34.882 ************************************ 00:23:34.882 11:34:18 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:23:34.882 * Looking for test storage... 00:23:34.882 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:23:34.882 11:34:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:23:34.882 11:34:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3316144 00:23:34.882 11:34:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:23:34.882 11:34:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:23:34.882 11:34:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3316144 00:23:34.882 11:34:18 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 3316144 ']' 00:23:34.882 11:34:18 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.882 11:34:18 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:34.882 11:34:18 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
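For reference, the three event-framework micro-benchmarks that just ran (event_perf, reactor, reactor_perf) are standalone binaries invoked with a core mask and a duration. The lines below simply restate those invocations relative to the SPDK source tree, with the absolute workspace paths from the log shortened.

```bash
# The invocations behind the event_perf / event_reactor / event_reactor_perf runs above.
./test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, run for 1 second
./test/event/reactor/reactor -t 1                # single-reactor timer/tick test
./test/event/reactor_perf/reactor_perf -t 1      # ~975k events/sec reported above
```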
00:23:34.882 11:34:18 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:34.882 11:34:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:34.882 [2024-06-10 11:34:18.767593] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:34.882 [2024-06-10 11:34:18.767644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316144 ] 00:23:34.882 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.141 [2024-06-10 11:34:18.841651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.141 [2024-06-10 11:34:18.939739] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.141 [2024-06-10 11:34:18.939759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.141 [2024-06-10 11:34:18.939841] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.141 [2024-06-10 11:34:18.939844] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.709 11:34:19 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:35.709 11:34:19 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:23:35.709 11:34:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:23:35.710 11:34:19 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.710 11:34:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:35.710 POWER: Env isn't set yet! 00:23:35.710 POWER: Attempting to initialise ACPI cpufreq power management... 00:23:35.710 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:35.710 POWER: Cannot set governor of lcore 0 to userspace 00:23:35.710 POWER: Attempting to initialise PSTAT power management... 
00:23:35.710 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:23:35.710 POWER: Initialized successfully for lcore 0 power management 00:23:35.710 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:23:35.710 POWER: Initialized successfully for lcore 1 power management 00:23:35.969 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:23:35.969 POWER: Initialized successfully for lcore 2 power management 00:23:35.969 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:23:35.969 POWER: Initialized successfully for lcore 3 power management 00:23:35.969 [2024-06-10 11:34:19.675422] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:23:35.969 [2024-06-10 11:34:19.675439] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:23:35.969 [2024-06-10 11:34:19.675451] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:23:35.969 11:34:19 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.969 11:34:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:23:35.969 11:34:19 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.969 11:34:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:35.969 [2024-06-10 11:34:19.751254] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:23:35.969 11:34:19 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.969 11:34:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:23:35.969 11:34:19 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:35.969 11:34:19 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:35.969 11:34:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:35.969 ************************************ 00:23:35.969 START TEST scheduler_create_thread 00:23:35.969 ************************************ 00:23:35.969 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:23:35.969 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:23:35.969 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.969 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.969 2 00:23:35.969 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 3 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 4 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 5 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 6 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 7 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 8 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 9 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 10 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.970 11:34:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:36.537 11:34:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.537 11:34:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:23:36.537 11:34:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:23:36.537 11:34:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.537 11:34:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:37.472 11:34:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.472 11:34:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:23:37.472 11:34:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.472 11:34:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:38.407 11:34:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.407 11:34:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:23:38.407 11:34:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:23:38.407 11:34:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.407 11:34:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:39.343 11:34:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.343 00:23:39.343 real 0m3.232s 00:23:39.343 user 0m0.025s 00:23:39.343 sys 0m0.007s 00:23:39.343 11:34:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:39.343 11:34:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:23:39.343 ************************************ 00:23:39.343 END TEST scheduler_create_thread 00:23:39.343 ************************************ 00:23:39.343 11:34:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:39.343 11:34:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3316144 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 3316144 ']' 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 3316144 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
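The scheduler_create_thread sequence above drives the scheduler test app entirely over RPC: the dynamic scheduler is selected, framework init is released (the app was started with --wait-for-rpc), and threads with different CPU masks and activity levels are created, retuned, and deleted through the test-only scheduler_plugin RPCs. The sketch below mirrors those calls with rpc.py directly; the real test goes through the rpc_cmd helper, and it assumes the scheduler_plugin module from test/event/scheduler is importable by rpc.py.

```bash
# Sketch of the RPC sequence shown above (flags copied from the log:
# -n thread name, -m cpumask, -a active percentage).
rpc() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }

./scripts/rpc.py framework_set_scheduler dynamic   # pick the dynamic scheduler
./scripts/rpc.py framework_start_init              # leave the --wait-for-rpc state

rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0

thread_id=$(rpc scheduler_thread_create -n half_active -a 0) # create, then retune its load
rpc scheduler_thread_set_active "$thread_id" 50

deleted=$(rpc scheduler_thread_create -n deleted -a 100)     # create and remove again
rpc scheduler_thread_delete "$deleted"
```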
00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3316144 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3316144' 00:23:39.343 killing process with pid 3316144 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 3316144 00:23:39.343 11:34:23 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 3316144 00:23:39.602 [2024-06-10 11:34:23.408544] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:23:39.602 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:23:39.602 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:23:39.602 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:23:39.602 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:23:39.602 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:23:39.602 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:23:39.602 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:23:39.602 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:23:39.861 00:23:39.861 real 0m5.037s 00:23:39.861 user 0m10.486s 00:23:39.861 sys 0m0.448s 00:23:39.861 11:34:23 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:39.861 11:34:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:39.861 ************************************ 00:23:39.861 END TEST event_scheduler 00:23:39.861 ************************************ 00:23:39.861 11:34:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:23:39.861 11:34:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:23:39.861 11:34:23 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:39.861 11:34:23 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:39.861 11:34:23 event -- common/autotest_common.sh@10 -- # set +x 00:23:39.861 ************************************ 00:23:39.861 START TEST app_repeat 00:23:39.861 ************************************ 00:23:39.861 11:34:23 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3316894 00:23:39.861 11:34:23 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3316894' 00:23:39.861 Process app_repeat pid: 3316894 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:23:39.861 spdk_app_start Round 0 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3316894 /var/tmp/spdk-nbd.sock 00:23:39.861 11:34:23 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3316894 ']' 00:23:39.861 11:34:23 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:39.861 11:34:23 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:39.861 11:34:23 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:39.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:39.861 11:34:23 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:39.861 11:34:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:39.861 11:34:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:23:39.861 [2024-06-10 11:34:23.788930] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:39.862 [2024-06-10 11:34:23.789015] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316894 ] 00:23:40.120 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.120 [2024-06-10 11:34:23.869719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:40.120 [2024-06-10 11:34:23.963904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.120 [2024-06-10 11:34:23.963908] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.687 11:34:24 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:40.687 11:34:24 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:23:40.687 11:34:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:40.946 Malloc0 00:23:40.946 11:34:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:41.204 Malloc1 00:23:41.204 11:34:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:41.204 11:34:25 
event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:41.204 11:34:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:41.462 /dev/nbd0 00:23:41.462 11:34:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:41.462 11:34:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:41.462 1+0 records in 00:23:41.462 1+0 records out 00:23:41.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243637 s, 16.8 MB/s 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:23:41.462 11:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:41.462 11:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:41.462 11:34:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:41.462 /dev/nbd1 00:23:41.462 11:34:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:41.462 11:34:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:23:41.462 11:34:25 
event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:41.462 11:34:25 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:41.719 1+0 records in 00:23:41.719 1+0 records out 00:23:41.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266338 s, 15.4 MB/s 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:41.719 11:34:25 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:23:41.719 11:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:41.719 11:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:41.720 { 00:23:41.720 "nbd_device": "/dev/nbd0", 00:23:41.720 "bdev_name": "Malloc0" 00:23:41.720 }, 00:23:41.720 { 00:23:41.720 "nbd_device": "/dev/nbd1", 00:23:41.720 "bdev_name": "Malloc1" 00:23:41.720 } 00:23:41.720 ]' 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:41.720 { 00:23:41.720 "nbd_device": "/dev/nbd0", 00:23:41.720 "bdev_name": "Malloc0" 00:23:41.720 }, 00:23:41.720 { 00:23:41.720 "nbd_device": "/dev/nbd1", 00:23:41.720 "bdev_name": "Malloc1" 00:23:41.720 } 00:23:41.720 ]' 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:41.720 /dev/nbd1' 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:41.720 /dev/nbd1' 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:41.720 
11:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:41.720 11:34:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:41.978 256+0 records in 00:23:41.978 256+0 records out 00:23:41.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114179 s, 91.8 MB/s 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:41.978 256+0 records in 00:23:41.978 256+0 records out 00:23:41.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020697 s, 50.7 MB/s 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:41.978 256+0 records in 00:23:41.978 256+0 records out 00:23:41.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226573 s, 46.3 MB/s 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:41.978 11:34:25 
event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:41.978 11:34:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:42.237 11:34:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:42.237 11:34:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:42.495 11:34:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:42.495 11:34:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
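Round 0 of app_repeat ends just below with a spdk_kill_instance SIGTERM over the nbd socket. The data-verify loop it has just completed is easy to reproduce by hand; the sketch below condenses the RPC and dd/cmp sequence visible in the trace, with scratch-file paths shortened for readability (the real test writes under the jenkins workspace):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create 64 4096            # creates Malloc0
    $rpc bdev_malloc_create 64 4096            # creates Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data through each nbd device, then read it back and compare.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"   # a non-zero exit here fails the test
    done
    rm /tmp/nbdrandtest

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_get_disks                         # expected to report no disks ([])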
00:23:42.754 11:34:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:43.013 [2024-06-10 11:34:26.780504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:43.013 [2024-06-10 11:34:26.864888] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.013 [2024-06-10 11:34:26.864891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.013 [2024-06-10 11:34:26.911542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:43.013 [2024-06-10 11:34:26.911592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:46.302 11:34:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:46.302 11:34:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:23:46.302 spdk_app_start Round 1 00:23:46.302 11:34:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3316894 /var/tmp/spdk-nbd.sock 00:23:46.302 11:34:29 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3316894 ']' 00:23:46.302 11:34:29 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:46.302 11:34:29 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:46.302 11:34:29 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:46.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:46.302 11:34:29 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:46.302 11:34:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:46.302 11:34:29 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:46.302 11:34:29 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:23:46.302 11:34:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:46.302 Malloc0 00:23:46.302 11:34:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:46.302 Malloc1 00:23:46.302 11:34:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:46.302 
11:34:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:46.302 11:34:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:46.562 /dev/nbd0 00:23:46.562 11:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:46.562 11:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:46.562 1+0 records in 00:23:46.562 1+0 records out 00:23:46.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247041 s, 16.6 MB/s 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:46.562 11:34:30 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:23:46.562 11:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:46.562 11:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:46.562 11:34:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:46.562 /dev/nbd1 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:46.821 
11:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:46.821 1+0 records in 00:23:46.821 1+0 records out 00:23:46.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181008 s, 22.6 MB/s 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:46.821 11:34:30 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:46.821 { 00:23:46.821 "nbd_device": "/dev/nbd0", 00:23:46.821 "bdev_name": "Malloc0" 00:23:46.821 }, 00:23:46.821 { 00:23:46.821 "nbd_device": "/dev/nbd1", 00:23:46.821 "bdev_name": "Malloc1" 00:23:46.821 } 00:23:46.821 ]' 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:46.821 { 00:23:46.821 "nbd_device": "/dev/nbd0", 00:23:46.821 "bdev_name": "Malloc0" 00:23:46.821 }, 00:23:46.821 { 00:23:46.821 "nbd_device": "/dev/nbd1", 00:23:46.821 "bdev_name": "Malloc1" 00:23:46.821 } 00:23:46.821 ]' 00:23:46.821 11:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:47.080 /dev/nbd1' 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:47.080 /dev/nbd1' 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:47.080 11:34:30 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:47.080 256+0 records in 00:23:47.080 256+0 records out 00:23:47.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109425 s, 95.8 MB/s 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:47.080 256+0 records in 00:23:47.080 256+0 records out 00:23:47.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208998 s, 50.2 MB/s 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:47.080 256+0 records in 00:23:47.080 256+0 records out 00:23:47.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221484 s, 47.3 MB/s 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.080 11:34:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:47.339 11:34:31 
event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:47.339 11:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:47.598 11:34:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:47.598 11:34:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:47.856 11:34:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:48.115 [2024-06-10 11:34:31.911627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:48.115 [2024-06-10 11:34:31.996777] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.115 [2024-06-10 11:34:31.996778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.115 [2024-06-10 11:34:32.042742] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
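Before either nbd device is written, the trace runs a waitfornbd-style readiness check: poll /proc/partitions until the device name appears, then do one 4 KiB direct read and confirm a non-empty file came back. A rough stand-alone equivalent follows; the retry limit of 20 matches the trace, while the sleep between attempts and the /tmp scratch path are assumptions added here:

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait for the kernel to register the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct 4 KiB read to prove the device actually answers I/O.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

    waitfornbd nbd0 || echo "nbd0 never became usable" >&2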
00:23:48.115 [2024-06-10 11:34:32.042791] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:51.399 11:34:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:51.399 11:34:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:23:51.399 spdk_app_start Round 2 00:23:51.399 11:34:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3316894 /var/tmp/spdk-nbd.sock 00:23:51.399 11:34:34 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3316894 ']' 00:23:51.399 11:34:34 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:51.399 11:34:34 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:51.399 11:34:34 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:51.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:51.399 11:34:34 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:51.399 11:34:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:51.399 11:34:34 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:51.399 11:34:34 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:23:51.399 11:34:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:51.399 Malloc0 00:23:51.399 11:34:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:51.399 Malloc1 00:23:51.399 11:34:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:51.399 11:34:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:51.657 /dev/nbd0 00:23:51.657 
11:34:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:51.657 11:34:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:51.657 1+0 records in 00:23:51.657 1+0 records out 00:23:51.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217754 s, 18.8 MB/s 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:51.657 11:34:35 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:23:51.657 11:34:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.657 11:34:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:51.657 11:34:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:51.914 /dev/nbd1 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:51.914 1+0 records in 00:23:51.914 1+0 records out 00:23:51.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257082 s, 15.9 MB/s 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:51.914 11:34:35 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:51.914 { 00:23:51.914 "nbd_device": "/dev/nbd0", 00:23:51.914 "bdev_name": "Malloc0" 00:23:51.914 }, 00:23:51.914 { 00:23:51.914 "nbd_device": "/dev/nbd1", 00:23:51.914 "bdev_name": "Malloc1" 00:23:51.914 } 00:23:51.914 ]' 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:51.914 { 00:23:51.914 "nbd_device": "/dev/nbd0", 00:23:51.914 "bdev_name": "Malloc0" 00:23:51.914 }, 00:23:51.914 { 00:23:51.914 "nbd_device": "/dev/nbd1", 00:23:51.914 "bdev_name": "Malloc1" 00:23:51.914 } 00:23:51.914 ]' 00:23:51.914 11:34:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:52.172 /dev/nbd1' 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:52.172 /dev/nbd1' 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:52.172 256+0 records in 00:23:52.172 256+0 records out 00:23:52.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114694 s, 91.4 MB/s 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:52.172 11:34:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:52.172 256+0 records in 00:23:52.172 256+0 records out 00:23:52.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207362 s, 50.6 MB/s 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:52.173 256+0 records in 00:23:52.173 256+0 records out 00:23:52.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225447 s, 46.5 MB/s 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:52.173 11:34:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@45 
-- # return 0 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:52.431 11:34:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:52.689 11:34:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:52.689 11:34:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:52.948 11:34:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:53.207 [2024-06-10 11:34:37.002677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:53.207 [2024-06-10 11:34:37.087318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.207 [2024-06-10 11:34:37.087321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.207 [2024-06-10 11:34:37.134411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:53.207 [2024-06-10 11:34:37.134461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
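The killprocess helper that tears the app down below (and that already stopped the scheduler app earlier) follows the same pattern each time in this trace: verify a pid was passed, confirm the process still exists, read its comm name (reactor_0 for an SPDK reactor), refuse to signal a bare sudo, then kill and wait. A simplified reading of that flow; the real helper in autotest_common.sh carries more error handling than is visible here:

    killprocess() {
        local pid=$1 name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
        [ "$name" = sudo ] && return 1                # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap it when it is a child of this shell
    }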
00:23:56.489 11:34:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3316894 /var/tmp/spdk-nbd.sock 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3316894 ']' 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:56.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:23:56.489 11:34:39 event.app_repeat -- event/event.sh@39 -- # killprocess 3316894 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 3316894 ']' 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 3316894 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:56.489 11:34:39 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3316894 00:23:56.489 11:34:40 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:56.489 11:34:40 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:56.489 11:34:40 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3316894' 00:23:56.489 killing process with pid 3316894 00:23:56.489 11:34:40 event.app_repeat -- common/autotest_common.sh@968 -- # kill 3316894 00:23:56.489 11:34:40 event.app_repeat -- common/autotest_common.sh@973 -- # wait 3316894 00:23:56.489 spdk_app_start is called in Round 0. 00:23:56.489 Shutdown signal received, stop current app iteration 00:23:56.489 Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 reinitialization... 00:23:56.489 spdk_app_start is called in Round 1. 00:23:56.489 Shutdown signal received, stop current app iteration 00:23:56.489 Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 reinitialization... 00:23:56.490 spdk_app_start is called in Round 2. 00:23:56.490 Shutdown signal received, stop current app iteration 00:23:56.490 Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 reinitialization... 00:23:56.490 spdk_app_start is called in Round 3. 
00:23:56.490 Shutdown signal received, stop current app iteration 00:23:56.490 11:34:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:23:56.490 11:34:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:23:56.490 00:23:56.490 real 0m16.456s 00:23:56.490 user 0m34.636s 00:23:56.490 sys 0m3.308s 00:23:56.490 11:34:40 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:56.490 11:34:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:56.490 ************************************ 00:23:56.490 END TEST app_repeat 00:23:56.490 ************************************ 00:23:56.490 11:34:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:23:56.490 11:34:40 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:23:56.490 11:34:40 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:56.490 11:34:40 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:56.490 11:34:40 event -- common/autotest_common.sh@10 -- # set +x 00:23:56.490 ************************************ 00:23:56.490 START TEST cpu_locks 00:23:56.490 ************************************ 00:23:56.490 11:34:40 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:23:56.490 * Looking for test storage... 00:23:56.490 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:23:56.490 11:34:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:23:56.490 11:34:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:23:56.490 11:34:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:23:56.490 11:34:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:23:56.490 11:34:40 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:56.490 11:34:40 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:56.490 11:34:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:56.748 ************************************ 00:23:56.748 START TEST default_locks 00:23:56.748 ************************************ 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3319238 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3319238 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3319238 ']' 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
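From here on every sub-test is launched through the harness's run_test wrapper, which prints the START TEST / END TEST banners and the real/user/sys timings seen throughout this log. A rough, hypothetical simplification of what that wrapper does (the real helper lives in test/common/autotest_common.sh and also manages the xtrace state shown in these traces):

    # simplified run_test: banner, time the body, banner again
    run_test() {
        local test_name=$1; shift
        echo "START TEST $test_name"
        time "$@"
        echo "END TEST $test_name"
    }
    # used as in the log: run_test cpu_locks ./test/event/cpu_locks.sh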
00:23:56.748 11:34:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:56.748 11:34:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:56.748 [2024-06-10 11:34:40.479174] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:56.748 [2024-06-10 11:34:40.479274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319238 ] 00:23:56.748 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.748 [2024-06-10 11:34:40.558542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.748 [2024-06-10 11:34:40.641916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.682 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:57.682 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:23:57.682 11:34:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3319238 00:23:57.682 11:34:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3319238 00:23:57.682 11:34:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:58.249 lslocks: write error 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3319238 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 3319238 ']' 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 3319238 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3319238 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3319238' 00:23:58.249 killing process with pid 3319238 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 3319238 00:23:58.249 11:34:41 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 3319238 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3319238 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3319238 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 3319238 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3319238 ']' 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:58.507 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3319238) - No such process 00:23:58.507 ERROR: process (pid: 3319238) is no longer running 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:58.507 00:23:58.507 real 0m1.910s 00:23:58.507 user 0m1.973s 00:23:58.507 sys 0m0.770s 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:58.507 11:34:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:58.507 ************************************ 00:23:58.507 END TEST default_locks 00:23:58.507 ************************************ 00:23:58.507 11:34:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:23:58.507 11:34:42 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:58.507 11:34:42 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:58.507 11:34:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:58.507 ************************************ 00:23:58.507 START TEST default_locks_via_rpc 00:23:58.507 ************************************ 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3319555 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3319555 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:23:58.507 11:34:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3319555 ']' 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:58.507 11:34:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:58.765 [2024-06-10 11:34:42.469375] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:23:58.765 [2024-06-10 11:34:42.469445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319555 ] 00:23:58.765 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.765 [2024-06-10 11:34:42.550524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.765 [2024-06-10 11:34:42.633838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3319555 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3319555 00:23:59.406 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3319555 00:23:59.981 11:34:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 3319555 ']' 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 3319555 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3319555 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3319555' 00:23:59.981 killing process with pid 3319555 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 3319555 00:23:59.981 11:34:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 3319555 00:24:00.240 00:24:00.240 real 0m1.607s 00:24:00.240 user 0m1.661s 00:24:00.240 sys 0m0.546s 00:24:00.240 11:34:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:00.240 11:34:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:00.240 ************************************ 00:24:00.240 END TEST default_locks_via_rpc 00:24:00.240 ************************************ 00:24:00.240 11:34:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:24:00.240 11:34:44 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:00.240 11:34:44 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:00.240 11:34:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:00.240 ************************************ 00:24:00.240 START TEST non_locking_app_on_locked_coremask 00:24:00.240 ************************************ 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3319823 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3319823 /var/tmp/spdk.sock 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3319823 ']' 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
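Both runs above (default_locks and default_locks_via_rpc) come down to the same check: when spdk_tgt claims a core it takes a lock on a per-core file /var/tmp/spdk_cpu_lock_NNN, and the helper simply greps lslocks output for it. The via_rpc variant additionally drops and re-takes the locks at runtime over RPC. A sketch, assuming a running target whose PID is in $pid and the default /var/tmp/spdk.sock RPC socket:

    # does this process hold any per-core lock file?
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock(s) held"
    # release the locks at runtime, then claim them again (default_locks_via_rpc does both)
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks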
00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:00.240 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:00.240 [2024-06-10 11:34:44.138074] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:00.240 [2024-06-10 11:34:44.138127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319823 ] 00:24:00.240 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.500 [2024-06-10 11:34:44.215883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.500 [2024-06-10 11:34:44.312991] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3319853 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3319853 /var/tmp/spdk2.sock 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3319853 ']' 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:01.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:01.067 11:34:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:01.067 [2024-06-10 11:34:44.990149] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:01.067 [2024-06-10 11:34:44.990208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319853 ] 00:24:01.326 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.326 [2024-06-10 11:34:45.097213] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
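non_locking_app_on_locked_coremask, which is starting here, checks that a target launched with --disable-cpumask-locks can share a core that another target has already locked. Stripped down, with the binary and socket paths as used in this run:

    ./build/bin/spdk_tgt -m 0x1 &                                                  # claims core 0 (lock file taken)
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, but opts out of locking

Only the first instance should show up in lslocks; the second prints "CPU core locks deactivated." instead, as the entry just above records.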
00:24:01.326 [2024-06-10 11:34:45.097261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.585 [2024-06-10 11:34:45.280846] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.152 11:34:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:02.152 11:34:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:24:02.152 11:34:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3319823 00:24:02.152 11:34:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3319823 00:24:02.152 11:34:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:03.087 lslocks: write error 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3319823 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3319823 ']' 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3319823 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3319823 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3319823' 00:24:03.088 killing process with pid 3319823 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3319823 00:24:03.088 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3319823 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3319853 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3319853 ']' 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3319853 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3319853 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3319853' 00:24:03.655 
killing process with pid 3319853 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3319853 00:24:03.655 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3319853 00:24:04.222 00:24:04.222 real 0m3.791s 00:24:04.222 user 0m3.983s 00:24:04.222 sys 0m1.203s 00:24:04.222 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:04.222 11:34:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:04.222 ************************************ 00:24:04.222 END TEST non_locking_app_on_locked_coremask 00:24:04.222 ************************************ 00:24:04.222 11:34:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:24:04.222 11:34:47 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:04.223 11:34:47 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:04.223 11:34:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:04.223 ************************************ 00:24:04.223 START TEST locking_app_on_unlocked_coremask 00:24:04.223 ************************************ 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3320386 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3320386 /var/tmp/spdk.sock 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3320386 ']' 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:04.223 11:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:24:04.223 [2024-06-10 11:34:48.014410] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:04.223 [2024-06-10 11:34:48.014472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320386 ] 00:24:04.223 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.223 [2024-06-10 11:34:48.094021] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
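locking_app_on_unlocked_coremask reverses the roles: the first target opts out of locking, so a second, lock-enabled target on the same core mask can still acquire the lock. Roughly:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # occupies core 0 without taking the lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # same core; this instance acquires the lock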
00:24:04.223 [2024-06-10 11:34:48.094054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.481 [2024-06-10 11:34:48.189952] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3320404 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3320404 /var/tmp/spdk2.sock 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3320404 ']' 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:05.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:05.048 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:24:05.048 [2024-06-10 11:34:48.860100] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:05.048 [2024-06-10 11:34:48.860191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320404 ] 00:24:05.048 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.048 [2024-06-10 11:34:48.969881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.307 [2024-06-10 11:34:49.155493] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.873 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:05.873 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:24:05.873 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3320404 00:24:05.873 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3320404 00:24:05.873 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:07.249 lslocks: write error 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3320386 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3320386 ']' 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3320386 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3320386 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3320386' 00:24:07.249 killing process with pid 3320386 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3320386 00:24:07.249 11:34:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3320386 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3320404 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3320404 ']' 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3320404 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3320404 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3320404' 00:24:07.816 killing process with pid 3320404 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3320404 00:24:07.816 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3320404 00:24:08.382 00:24:08.382 real 0m4.074s 00:24:08.382 user 0m4.254s 00:24:08.383 sys 0m1.423s 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:08.383 ************************************ 00:24:08.383 END TEST locking_app_on_unlocked_coremask 00:24:08.383 ************************************ 00:24:08.383 11:34:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:24:08.383 11:34:52 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:08.383 11:34:52 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:08.383 11:34:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:08.383 ************************************ 00:24:08.383 START TEST locking_app_on_locked_coremask 00:24:08.383 ************************************ 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3320954 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3320954 /var/tmp/spdk.sock 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3320954 ']' 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:08.383 11:34:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:08.383 [2024-06-10 11:34:52.172445] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:08.383 [2024-06-10 11:34:52.172532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320954 ] 00:24:08.383 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.383 [2024-06-10 11:34:52.252815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.640 [2024-06-10 11:34:52.342894] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3320973 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3320973 /var/tmp/spdk2.sock 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3320973 /var/tmp/spdk2.sock 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3320973 /var/tmp/spdk2.sock 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3320973 ']' 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:09.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:09.208 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:09.208 [2024-06-10 11:34:53.060200] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:09.208 [2024-06-10 11:34:53.060300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320973 ] 00:24:09.208 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.466 [2024-06-10 11:34:53.171940] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3320954 has claimed it. 00:24:09.466 [2024-06-10 11:34:53.171983] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:24:10.033 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3320973) - No such process 00:24:10.033 ERROR: process (pid: 3320973) is no longer running 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3320954 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3320954 00:24:10.033 11:34:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:10.600 lslocks: write error 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3320954 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3320954 ']' 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3320954 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3320954 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3320954' 00:24:10.600 killing process with pid 3320954 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3320954 00:24:10.600 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3320954 00:24:10.859 00:24:10.859 real 0m2.644s 00:24:10.859 user 0m2.892s 00:24:10.859 sys 0m0.847s 00:24:10.859 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:24:10.859 11:34:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:10.859 ************************************ 00:24:10.859 END TEST locking_app_on_locked_coremask 00:24:10.859 ************************************ 00:24:11.119 11:34:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:24:11.119 11:34:54 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:11.119 11:34:54 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:11.119 11:34:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:11.119 ************************************ 00:24:11.119 START TEST locking_overlapped_coremask 00:24:11.119 ************************************ 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3321342 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3321342 /var/tmp/spdk.sock 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3321342 ']' 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:11.119 11:34:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:24:11.119 [2024-06-10 11:34:54.884119] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
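The locking_app_on_locked_coremask run that just ended is the negative case: with the lock already held, a second lock-enabled target on the same core must refuse to start, and the harness wraps that launch in its NOT helper so the test passes precisely because waitforlisten fails. In outline:

    ./build/bin/spdk_tgt -m 0x1 &                          # holds /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # exits: "Unable to acquire lock on assigned core mask"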
00:24:11.119 [2024-06-10 11:34:54.884179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321342 ] 00:24:11.119 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.119 [2024-06-10 11:34:54.961127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:11.119 [2024-06-10 11:34:55.056779] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.119 [2024-06-10 11:34:55.056868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.119 [2024-06-10 11:34:55.056869] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3321361 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3321361 /var/tmp/spdk2.sock 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3321361 /var/tmp/spdk2.sock 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3321361 /var/tmp/spdk2.sock 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3321361 ']' 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:12.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:12.057 11:34:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:12.057 [2024-06-10 11:34:55.742760] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:12.057 [2024-06-10 11:34:55.742845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321361 ] 00:24:12.057 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.057 [2024-06-10 11:34:55.853356] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3321342 has claimed it. 00:24:12.057 [2024-06-10 11:34:55.853395] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:24:12.625 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3321361) - No such process 00:24:12.625 ERROR: process (pid: 3321361) is no longer running 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3321342 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 3321342 ']' 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 3321342 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3321342 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3321342' 00:24:12.625 killing process with pid 3321342 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
3321342 00:24:12.625 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 3321342 00:24:12.884 00:24:12.884 real 0m1.949s 00:24:12.884 user 0m5.419s 00:24:12.884 sys 0m0.471s 00:24:12.884 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:12.884 11:34:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:12.884 ************************************ 00:24:12.884 END TEST locking_overlapped_coremask 00:24:12.884 ************************************ 00:24:13.145 11:34:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:24:13.145 11:34:56 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:13.145 11:34:56 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:13.145 11:34:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:13.145 ************************************ 00:24:13.145 START TEST locking_overlapped_coremask_via_rpc 00:24:13.145 ************************************ 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3321563 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3321563 /var/tmp/spdk.sock 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3321563 ']' 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:13.145 11:34:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:13.145 [2024-06-10 11:34:56.917410] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:13.145 [2024-06-10 11:34:56.917477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321563 ] 00:24:13.145 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.145 [2024-06-10 11:34:56.996085] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
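In the locking_overlapped_coremask run above the two masks only partly overlap: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target dies on core 2 while the first keeps its three locks, which check_remaining_locks then matches against /var/tmp/spdk_cpu_lock_000-002. Condensed:

    ./build/bin/spdk_tgt -m 0x7 &                          # locks cores 0, 1 and 2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # wants cores 2-4, fails: core 2 already claimed
    ls /var/tmp/spdk_cpu_lock_*                            # expect exactly _000 _001 _002 to remain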
00:24:13.145 [2024-06-10 11:34:56.996114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:13.145 [2024-06-10 11:34:57.087920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.145 [2024-06-10 11:34:57.088010] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.145 [2024-06-10 11:34:57.088012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3321741 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3321741 /var/tmp/spdk2.sock 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3321741 ']' 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:14.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:14.081 11:34:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:14.081 [2024-06-10 11:34:57.771689] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:14.081 [2024-06-10 11:34:57.771757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321741 ] 00:24:14.081 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.081 [2024-06-10 11:34:57.880056] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:24:14.081 [2024-06-10 11:34:57.880088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:14.340 [2024-06-10 11:34:58.056317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.340 [2024-06-10 11:34:58.056424] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.340 [2024-06-10 11:34:58.056424] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:14.909 [2024-06-10 11:34:58.624309] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3321563 has claimed it. 
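The first target above was launched with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two coremasks overlap on core 2, which is exactly the core the lock claim fails on. A quick shell check of that overlap (illustrative only, not part of the test script):

  for mask in 0x7 0x1c; do
    printf '%-4s -> cores:' "$mask"
    for i in $(seq 0 7); do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
    echo
  done
  # 0x7  -> cores: 0 1 2
  # 0x1c -> cores: 2 3 4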
00:24:14.909 request: 00:24:14.909 { 00:24:14.909 "method": "framework_enable_cpumask_locks", 00:24:14.909 "req_id": 1 00:24:14.909 } 00:24:14.909 Got JSON-RPC error response 00:24:14.909 response: 00:24:14.909 { 00:24:14.909 "code": -32603, 00:24:14.909 "message": "Failed to claim CPU core: 2" 00:24:14.909 } 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3321563 /var/tmp/spdk.sock 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3321563 ']' 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3321741 /var/tmp/spdk2.sock 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3321741 ']' 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:14.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
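The request/response pair above is the JSON-RPC exchange behind rpc_cmd framework_enable_cpumask_locks on the second target's socket: with the first target (pid 3321563) still holding the lock for core 2, the call fails with -32603 as the test expects. Assuming a checked-out SPDK tree whose scripts/rpc.py exposes this method, the same call could be issued by hand against the same socket:

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected while core 2 is still claimed: 'Failed to claim CPU core: 2'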
00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:14.909 11:34:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:15.168 11:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:15.168 11:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:24:15.168 11:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:24:15.168 11:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:24:15.168 11:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:24:15.168 11:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:24:15.168 00:24:15.168 real 0m2.125s 00:24:15.168 user 0m0.830s 00:24:15.168 sys 0m0.222s 00:24:15.168 11:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:15.168 11:34:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:15.168 ************************************ 00:24:15.168 END TEST locking_overlapped_coremask_via_rpc 00:24:15.168 ************************************ 00:24:15.168 11:34:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:24:15.168 11:34:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3321563 ]] 00:24:15.168 11:34:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3321563 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3321563 ']' 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3321563 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3321563 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3321563' 00:24:15.168 killing process with pid 3321563 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3321563 00:24:15.168 11:34:59 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3321563 00:24:15.737 11:34:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3321741 ]] 00:24:15.737 11:34:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3321741 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3321741 ']' 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3321741 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3321741 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3321741' 00:24:15.737 killing process with pid 3321741 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3321741 00:24:15.737 11:34:59 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3321741 00:24:15.997 11:34:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:24:15.997 11:34:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:24:15.997 11:34:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3321563 ]] 00:24:15.997 11:34:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3321563 00:24:15.997 11:34:59 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3321563 ']' 00:24:15.997 11:34:59 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3321563 00:24:15.997 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3321563) - No such process 00:24:15.997 11:34:59 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3321563 is not found' 00:24:15.997 Process with pid 3321563 is not found 00:24:15.997 11:34:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3321741 ]] 00:24:15.997 11:34:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3321741 00:24:15.997 11:34:59 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3321741 ']' 00:24:15.997 11:34:59 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3321741 00:24:15.997 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3321741) - No such process 00:24:15.997 11:34:59 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3321741 is not found' 00:24:15.997 Process with pid 3321741 is not found 00:24:15.997 11:34:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:24:15.997 00:24:15.997 real 0m19.562s 00:24:15.997 user 0m31.824s 00:24:15.997 sys 0m6.544s 00:24:15.997 11:34:59 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:15.997 11:34:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:15.997 ************************************ 00:24:15.997 END TEST cpu_locks 00:24:15.997 ************************************ 00:24:15.997 00:24:15.997 real 0m45.453s 00:24:15.997 user 1m23.642s 00:24:15.997 sys 0m11.032s 00:24:15.997 11:34:59 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:15.997 11:34:59 event -- common/autotest_common.sh@10 -- # set +x 00:24:15.997 ************************************ 00:24:15.997 END TEST event 00:24:15.997 ************************************ 00:24:15.997 11:34:59 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:24:15.997 11:34:59 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:15.997 11:34:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:15.997 11:34:59 -- common/autotest_common.sh@10 -- # set +x 00:24:16.256 ************************************ 00:24:16.256 START TEST thread 00:24:16.256 ************************************ 00:24:16.256 11:34:59 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:24:16.256 * Looking for test storage... 00:24:16.256 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:24:16.256 11:35:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:16.256 11:35:00 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:24:16.256 11:35:00 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:16.256 11:35:00 thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.256 ************************************ 00:24:16.256 START TEST thread_poller_perf 00:24:16.256 ************************************ 00:24:16.256 11:35:00 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:16.256 [2024-06-10 11:35:00.124359] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:16.256 [2024-06-10 11:35:00.124431] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322154 ] 00:24:16.256 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.514 [2024-06-10 11:35:00.206188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.514 [2024-06-10 11:35:00.290579] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.514 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:24:17.450 ====================================== 00:24:17.450 busy:2303901014 (cyc) 00:24:17.450 total_run_count: 849000 00:24:17.450 tsc_hz: 2300000000 (cyc) 00:24:17.450 ====================================== 00:24:17.450 poller_cost: 2713 (cyc), 1179 (nsec) 00:24:17.450 00:24:17.450 real 0m1.265s 00:24:17.450 user 0m1.166s 00:24:17.450 sys 0m0.095s 00:24:17.450 11:35:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:17.450 11:35:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:24:17.451 ************************************ 00:24:17.451 END TEST thread_poller_perf 00:24:17.451 ************************************ 00:24:17.709 11:35:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:17.709 11:35:01 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:24:17.709 11:35:01 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:17.709 11:35:01 thread -- common/autotest_common.sh@10 -- # set +x 00:24:17.709 ************************************ 00:24:17.709 START TEST thread_poller_perf 00:24:17.709 ************************************ 00:24:17.709 11:35:01 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:17.709 [2024-06-10 11:35:01.455876] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:17.709 [2024-06-10 11:35:01.455960] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322334 ] 00:24:17.709 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.709 [2024-06-10 11:35:01.535500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.709 [2024-06-10 11:35:01.623018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.709 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:24:19.086 ====================================== 00:24:19.086 busy:2301426246 (cyc) 00:24:19.086 total_run_count: 13650000 00:24:19.086 tsc_hz: 2300000000 (cyc) 00:24:19.086 ====================================== 00:24:19.086 poller_cost: 168 (cyc), 73 (nsec) 00:24:19.086 00:24:19.086 real 0m1.264s 00:24:19.086 user 0m1.148s 00:24:19.086 sys 0m0.111s 00:24:19.086 11:35:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:19.086 11:35:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.086 ************************************ 00:24:19.086 END TEST thread_poller_perf 00:24:19.086 ************************************ 00:24:19.086 11:35:02 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:24:19.086 11:35:02 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:24:19.087 11:35:02 thread -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:19.087 11:35:02 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:19.087 11:35:02 thread -- common/autotest_common.sh@10 -- # set +x 00:24:19.087 ************************************ 00:24:19.087 START TEST thread_spdk_lock 00:24:19.087 ************************************ 00:24:19.087 11:35:02 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:24:19.087 [2024-06-10 11:35:02.791636] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
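In the two poller_perf runs above, poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz (2.3 GHz here). The reported numbers can be reproduced with integer division (assumes bc is installed; truncation matches the rounding in the report):

  echo '2303901014 / 849000' | bc                # ~2713 cyc per poller, -l 1 run
  echo '2713 * 1000000000 / 2300000000' | bc     # ~1179 nsec
  echo '2301426246 / 13650000' | bc              # ~168 cyc per poller, -l 0 run
  echo '168 * 1000000000 / 2300000000' | bc      # ~73 nsec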
00:24:19.087 [2024-06-10 11:35:02.791689] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322519 ] 00:24:19.087 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.087 [2024-06-10 11:35:02.863513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:19.087 [2024-06-10 11:35:02.952013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.087 [2024-06-10 11:35:02.952016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.655 [2024-06-10 11:35:03.449225] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:24:19.655 [2024-06-10 11:35:03.449265] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:24:19.655 [2024-06-10 11:35:03.449292] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x14cb540 00:24:19.655 [2024-06-10 11:35:03.450189] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:24:19.655 [2024-06-10 11:35:03.450295] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:24:19.655 [2024-06-10 11:35:03.450314] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:24:19.655 Starting test contend 00:24:19.655 Worker Delay Wait us Hold us Total us 00:24:19.655 0 3 175487 189609 365096 00:24:19.655 1 5 97107 287224 384331 00:24:19.655 PASS test contend 00:24:19.655 Starting test hold_by_poller 00:24:19.655 PASS test hold_by_poller 00:24:19.655 Starting test hold_by_message 00:24:19.655 PASS test hold_by_message 00:24:19.655 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:24:19.655 100014 assertions passed 00:24:19.655 0 assertions failed 00:24:19.655 00:24:19.655 real 0m0.746s 00:24:19.655 user 0m1.150s 00:24:19.655 sys 0m0.090s 00:24:19.655 11:35:03 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:19.655 11:35:03 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:24:19.655 ************************************ 00:24:19.655 END TEST thread_spdk_lock 00:24:19.655 ************************************ 00:24:19.655 00:24:19.655 real 0m3.587s 00:24:19.655 user 0m3.574s 00:24:19.655 sys 0m0.525s 00:24:19.655 11:35:03 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:19.655 11:35:03 thread -- common/autotest_common.sh@10 -- # set +x 00:24:19.655 ************************************ 00:24:19.655 END TEST thread 00:24:19.655 ************************************ 00:24:19.655 11:35:03 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:24:19.655 11:35:03 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:19.914 11:35:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:19.914 11:35:03 -- common/autotest_common.sh@10 -- # set +x 00:24:19.914 ************************************ 00:24:19.914 START TEST accel 00:24:19.914 ************************************ 00:24:19.914 11:35:03 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:24:19.914 * Looking for test storage... 00:24:19.914 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:24:19.914 11:35:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:24:19.914 11:35:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:24:19.914 11:35:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:24:19.914 11:35:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3322655 00:24:19.914 11:35:03 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:24:19.914 11:35:03 accel -- accel/accel.sh@63 -- # waitforlisten 3322655 00:24:19.914 11:35:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:24:19.914 11:35:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:19.914 11:35:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:19.914 11:35:03 accel -- common/autotest_common.sh@830 -- # '[' -z 3322655 ']' 00:24:19.914 11:35:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:19.914 11:35:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:19.914 11:35:03 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.914 11:35:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:19.914 11:35:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:24:19.914 11:35:03 accel -- accel/accel.sh@41 -- # jq -r . 00:24:19.914 11:35:03 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:19.914 11:35:03 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.914 11:35:03 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:19.914 11:35:03 accel -- common/autotest_common.sh@10 -- # set +x 00:24:19.914 [2024-06-10 11:35:03.763700] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:19.914 [2024-06-10 11:35:03.763753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322655 ] 00:24:19.914 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.914 [2024-06-10 11:35:03.842088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.173 [2024-06-10 11:35:03.938157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.742 11:35:04 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:20.742 11:35:04 accel -- common/autotest_common.sh@863 -- # return 0 00:24:20.742 11:35:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:24:20.742 11:35:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:24:20.742 11:35:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:24:20.742 11:35:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:24:20.742 11:35:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:24:20.742 11:35:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:24:20.742 11:35:04 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.742 11:35:04 accel -- common/autotest_common.sh@10 -- # set +x 00:24:20.742 11:35:04 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:24:20.742 11:35:04 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.742 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.742 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.742 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.742 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.742 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.742 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.742 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.742 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 
11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # IFS== 00:24:20.743 11:35:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:20.743 11:35:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:20.743 11:35:04 accel -- accel/accel.sh@75 -- # killprocess 3322655 00:24:20.743 11:35:04 accel -- common/autotest_common.sh@949 -- # '[' -z 3322655 ']' 00:24:20.743 11:35:04 accel -- common/autotest_common.sh@953 -- # kill -0 3322655 00:24:20.743 11:35:04 accel -- common/autotest_common.sh@954 -- # uname 00:24:20.743 11:35:04 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:20.743 11:35:04 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3322655 00:24:21.002 11:35:04 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:21.002 11:35:04 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:21.002 11:35:04 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3322655' 00:24:21.002 killing process with pid 3322655 00:24:21.002 11:35:04 accel -- common/autotest_common.sh@968 -- # kill 3322655 00:24:21.002 11:35:04 accel -- common/autotest_common.sh@973 -- # 
wait 3322655 00:24:21.261 11:35:05 accel -- accel/accel.sh@76 -- # trap - ERR 00:24:21.261 11:35:05 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:24:21.261 11:35:05 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:21.261 11:35:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:21.261 11:35:05 accel -- common/autotest_common.sh@10 -- # set +x 00:24:21.261 11:35:05 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:24:21.261 11:35:05 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:24:21.261 11:35:05 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:21.261 11:35:05 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:24:21.261 11:35:05 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:24:21.261 11:35:05 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:21.261 11:35:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:21.261 11:35:05 accel -- common/autotest_common.sh@10 -- # set +x 00:24:21.261 ************************************ 00:24:21.261 START TEST accel_missing_filename 00:24:21.261 ************************************ 00:24:21.261 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:24:21.261 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:24:21.261 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:24:21.261 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:24:21.261 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:21.261 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:24:21.261 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:21.261 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:24:21.261 11:35:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:24:21.261 11:35:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:24:21.261 11:35:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:21.261 11:35:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:21.261 11:35:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:21.261 11:35:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:21.261 
11:35:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:21.261 11:35:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:24:21.261 11:35:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:24:21.261 [2024-06-10 11:35:05.191753] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:21.261 [2024-06-10 11:35:05.191831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322867 ] 00:24:21.520 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.520 [2024-06-10 11:35:05.272442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.520 [2024-06-10 11:35:05.361109] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.520 [2024-06-10 11:35:05.408263] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:21.779 [2024-06-10 11:35:05.479164] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:24:21.779 A filename is required. 00:24:21.779 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:24:21.779 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:21.779 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:24:21.779 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:24:21.779 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:24:21.779 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:21.779 00:24:21.779 real 0m0.395s 00:24:21.779 user 0m0.276s 00:24:21.779 sys 0m0.157s 00:24:21.779 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:21.779 11:35:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:24:21.779 ************************************ 00:24:21.779 END TEST accel_missing_filename 00:24:21.779 ************************************ 00:24:21.779 11:35:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:21.779 11:35:05 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:24:21.779 11:35:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:21.779 11:35:05 accel -- common/autotest_common.sh@10 -- # set +x 00:24:21.779 ************************************ 00:24:21.779 START TEST accel_compress_verify 00:24:21.779 ************************************ 00:24:21.779 11:35:05 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:21.779 11:35:05 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:24:21.779 11:35:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:21.779 11:35:05 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:24:21.779 11:35:05 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:21.779 11:35:05 accel.accel_compress_verify -- 
common/autotest_common.sh@641 -- # type -t accel_perf 00:24:21.779 11:35:05 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:21.779 11:35:05 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:24:21.779 11:35:05 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:24:21.779 [2024-06-10 11:35:05.657744] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:21.779 [2024-06-10 11:35:05.657827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323044 ] 00:24:21.779 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.039 [2024-06-10 11:35:05.739419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.039 [2024-06-10 11:35:05.822604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.039 [2024-06-10 11:35:05.864430] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:22.039 [2024-06-10 11:35:05.932160] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:24:22.299 00:24:22.299 Compression does not support the verify option, aborting. 
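The two compress tests above are intentional failures driven by accel_perf's argument checks: -w compress without -l aborts with 'A filename is required.', and adding -y makes it abort because compression does not support the verify option. A plain run that these checks would accept (a sketch using the same binary and input file as the test, paths relative to the SPDK tree root; any readable file works as the -l input):

  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib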
00:24:22.299 11:35:06 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:24:22.299 11:35:06 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:22.299 11:35:06 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:24:22.299 11:35:06 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:24:22.299 11:35:06 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:24:22.299 11:35:06 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:22.299 00:24:22.299 real 0m0.381s 00:24:22.299 user 0m0.269s 00:24:22.299 sys 0m0.151s 00:24:22.299 11:35:06 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:22.299 11:35:06 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:24:22.299 ************************************ 00:24:22.299 END TEST accel_compress_verify 00:24:22.299 ************************************ 00:24:22.299 11:35:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:24:22.299 11:35:06 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:22.299 11:35:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:22.299 11:35:06 accel -- common/autotest_common.sh@10 -- # set +x 00:24:22.299 ************************************ 00:24:22.299 START TEST accel_wrong_workload 00:24:22.299 ************************************ 00:24:22.299 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:24:22.299 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:24:22.299 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:24:22.299 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:24:22.299 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:22.299 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:24:22.299 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:22.299 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:24:22.299 11:35:06 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:24:22.299 Unsupported workload type: foobar 00:24:22.299 [2024-06-10 11:35:06.106801] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:24:22.299 accel_perf options: 00:24:22.299 [-h help message] 00:24:22.299 [-q queue depth per core] 00:24:22.299 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:24:22.299 [-T number of threads per core 00:24:22.299 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:24:22.299 [-t time in seconds] 00:24:22.299 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:24:22.299 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:24:22.299 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:24:22.299 [-l for compress/decompress workloads, name of uncompressed input file 00:24:22.299 [-S for crc32c workload, use this seed value (default 0) 00:24:22.299 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:24:22.299 [-f for fill workload, use this BYTE value (default 255) 00:24:22.299 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:24:22.299 [-y verify result if this switch is on] 00:24:22.300 [-a tasks to allocate per core (default: same value as -q)] 00:24:22.300 Can be used to spread operations across a wider range of memory. 00:24:22.300 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:24:22.300 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:22.300 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:22.300 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:22.300 00:24:22.300 real 0m0.030s 00:24:22.300 user 0m0.034s 00:24:22.300 sys 0m0.019s 00:24:22.300 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:22.300 11:35:06 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:24:22.300 ************************************ 00:24:22.300 END TEST accel_wrong_workload 00:24:22.300 ************************************ 00:24:22.300 11:35:06 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:24:22.300 11:35:06 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:24:22.300 11:35:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:22.300 11:35:06 accel -- common/autotest_common.sh@10 -- # set +x 00:24:22.300 ************************************ 00:24:22.300 START TEST accel_negative_buffers 00:24:22.300 ************************************ 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 
00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:24:22.300 11:35:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:24:22.300 -x option must be non-negative. 00:24:22.300 [2024-06-10 11:35:06.205969] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:24:22.300 accel_perf options: 00:24:22.300 [-h help message] 00:24:22.300 [-q queue depth per core] 00:24:22.300 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:24:22.300 [-T number of threads per core 00:24:22.300 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:24:22.300 [-t time in seconds] 00:24:22.300 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:24:22.300 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:24:22.300 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:24:22.300 [-l for compress/decompress workloads, name of uncompressed input file 00:24:22.300 [-S for crc32c workload, use this seed value (default 0) 00:24:22.300 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:24:22.300 [-f for fill workload, use this BYTE value (default 255) 00:24:22.300 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:24:22.300 [-y verify result if this switch is on] 00:24:22.300 [-a tasks to allocate per core (default: same value as -q)] 00:24:22.300 Can be used to spread operations across a wider range of memory. 
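Both failures above come from accel_perf's own option parsing rather than from the accel framework: foobar is not one of the listed workload types, and -x (the number of xor source buffers) must be at least 2. Sticking to flags shown in the help text above, a valid xor invocation would look roughly like this (a sketch, not part of the test script):

  ./build/examples/accel_perf -q 64 -o 4096 -t 1 -w xor -x 2 -y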
00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:22.300 00:24:22.300 real 0m0.028s 00:24:22.300 user 0m0.013s 00:24:22.300 sys 0m0.015s 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:22.300 11:35:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:24:22.300 ************************************ 00:24:22.300 END TEST accel_negative_buffers 00:24:22.300 ************************************ 00:24:22.300 Error: writing output failed: Broken pipe 00:24:22.300 11:35:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:24:22.300 11:35:06 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:24:22.300 11:35:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:22.300 11:35:06 accel -- common/autotest_common.sh@10 -- # set +x 00:24:22.560 ************************************ 00:24:22.560 START TEST accel_crc32c 00:24:22.560 ************************************ 00:24:22.560 11:35:06 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:24:22.560 [2024-06-10 11:35:06.291354] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:22.560 [2024-06-10 11:35:06.291404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323118 ] 00:24:22.560 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.560 [2024-06-10 11:35:06.369128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.560 [2024-06-10 11:35:06.455316] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.560 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:24:22.820 11:35:06 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:22.820 11:35:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.758 11:35:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.758 11:35:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.758 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.758 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.758 11:35:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.759 11:35:07 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:24:23.759 11:35:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:23.759 00:24:23.759 real 0m1.382s 00:24:23.759 user 0m1.246s 00:24:23.759 sys 0m0.148s 00:24:23.759 11:35:07 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:23.759 11:35:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:24:23.759 ************************************ 00:24:23.759 END TEST accel_crc32c 00:24:23.759 ************************************ 00:24:23.759 11:35:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:24:23.759 11:35:07 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:24:23.759 11:35:07 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:23.759 11:35:07 accel -- common/autotest_common.sh@10 -- # set +x 00:24:24.018 ************************************ 00:24:24.018 START TEST accel_crc32c_C2 00:24:24.018 ************************************ 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:24:24.018 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:24:24.018 [2024-06-10 11:35:07.756598] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:24.018 [2024-06-10 11:35:07.756687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323316 ] 00:24:24.018 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.018 [2024-06-10 11:35:07.836103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.018 [2024-06-10 11:35:07.922399] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.278 11:35:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:25.286 00:24:25.286 real 0m1.398s 00:24:25.286 user 0m1.254s 00:24:25.286 sys 0m0.157s 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:25.286 11:35:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.286 ************************************ 00:24:25.286 END TEST accel_crc32c_C2 00:24:25.286 ************************************ 00:24:25.286 11:35:09 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:24:25.286 11:35:09 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:25.286 11:35:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:25.286 11:35:09 accel -- common/autotest_common.sh@10 -- # set +x 00:24:25.286 ************************************ 00:24:25.286 START TEST accel_copy 00:24:25.286 ************************************ 00:24:25.286 11:35:09 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:25.286 11:35:09 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:24:25.286 11:35:09 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:24:25.286 [2024-06-10 11:35:09.219949] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:25.286 [2024-06-10 11:35:09.220007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323515 ] 00:24:25.545 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.545 [2024-06-10 11:35:09.297057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.545 [2024-06-10 11:35:09.381123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.545 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:25.545 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:25.546 11:35:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:24:26.924 11:35:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:26.924 00:24:26.924 real 0m1.369s 00:24:26.924 user 0m1.244s 00:24:26.924 sys 0m0.137s 00:24:26.924 11:35:10 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:26.924 11:35:10 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:24:26.924 ************************************ 00:24:26.924 END TEST accel_copy 00:24:26.924 ************************************ 00:24:26.924 11:35:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:26.924 11:35:10 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:24:26.924 11:35:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:26.924 11:35:10 accel -- common/autotest_common.sh@10 -- # set +x 00:24:26.924 ************************************ 00:24:26.924 START TEST accel_fill 00:24:26.924 ************************************ 00:24:26.924 11:35:10 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:24:26.924 [2024-06-10 11:35:10.653762] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:26.924 [2024-06-10 11:35:10.653819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323712 ] 00:24:26.924 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.924 [2024-06-10 11:35:10.729724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.924 [2024-06-10 11:35:10.816692] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:26.924 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:27.183 11:35:10 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:27.183 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:27.184 11:35:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:24:28.123 11:35:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:28.123 00:24:28.123 real 0m1.377s 00:24:28.123 user 0m1.248s 00:24:28.123 sys 0m0.142s 00:24:28.123 11:35:12 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:28.123 11:35:12 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:24:28.123 ************************************ 00:24:28.123 END TEST accel_fill 00:24:28.123 ************************************ 00:24:28.123 11:35:12 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:24:28.123 11:35:12 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:28.123 11:35:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:28.123 11:35:12 accel -- common/autotest_common.sh@10 -- # set +x 00:24:28.381 ************************************ 00:24:28.381 START TEST accel_copy_crc32c 00:24:28.381 ************************************ 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:24:28.381 [2024-06-10 11:35:12.100945] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:28.381 [2024-06-10 11:35:12.100989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323916 ] 00:24:28.381 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.381 [2024-06-10 11:35:12.178773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.381 [2024-06-10 11:35:12.264855] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.381 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:28.382 11:35:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:29.758 11:35:13 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:29.758 00:24:29.758 real 0m1.381s 00:24:29.758 user 0m1.253s 00:24:29.758 sys 0m0.143s 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:29.758 11:35:13 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:24:29.758 ************************************ 00:24:29.758 END TEST accel_copy_crc32c 00:24:29.758 ************************************ 00:24:29.758 11:35:13 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:24:29.758 11:35:13 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:24:29.758 11:35:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:29.758 11:35:13 accel -- common/autotest_common.sh@10 -- # set +x 00:24:29.758 ************************************ 00:24:29.758 START TEST accel_copy_crc32c_C2 00:24:29.758 ************************************ 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:24:29.758 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:24:29.758 [2024-06-10 11:35:13.564863] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:29.758 [2024-06-10 11:35:13.564943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324148 ] 00:24:29.758 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.758 [2024-06-10 11:35:13.645213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.017 [2024-06-10 11:35:13.734347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.017 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.018 11:35:13 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:30.018 11:35:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:31.396 00:24:31.396 real 0m1.399s 00:24:31.396 user 0m1.258s 00:24:31.396 sys 0m0.154s 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:31.396 11:35:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.396 ************************************ 00:24:31.396 END TEST 
accel_copy_crc32c_C2 00:24:31.396 ************************************ 00:24:31.396 11:35:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:24:31.396 11:35:14 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:31.396 11:35:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:31.396 11:35:14 accel -- common/autotest_common.sh@10 -- # set +x 00:24:31.396 ************************************ 00:24:31.396 START TEST accel_dualcast 00:24:31.396 ************************************ 00:24:31.396 11:35:15 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:24:31.396 [2024-06-10 11:35:15.037344] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
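The dualcast stage launched just above follows the same pattern as the copy_crc32c test that finished before it: run_test wraps accel_test, which builds the accel config and starts the accel_perf example with the flags recorded in the trace. A minimal way to reproduce that invocation by hand, using the workspace path shown in the trace; the harness also pipes a generated JSON config in via -c /dev/fd/62, which is dropped here on the assumption that the default software module is acceptable, and SPDK_DIR is only a convenience name introduced for this sketch:

   # flags copied from the trace: run for 1 second (-t 1), dualcast workload (-w), verification on (-y)
   SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
   "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dualcast -y
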
00:24:31.396 [2024-06-10 11:35:15.037426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324394 ] 00:24:31.396 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.396 [2024-06-10 11:35:15.115574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.396 [2024-06-10 11:35:15.201611] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.396 11:35:15 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 
11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:31.397 11:35:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:32.776 11:35:16 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:32.776 11:35:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:32.777 11:35:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:24:32.777 11:35:16 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:32.777 00:24:32.777 real 0m1.393s 00:24:32.777 user 0m1.254s 00:24:32.777 sys 0m0.152s 00:24:32.777 11:35:16 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:32.777 11:35:16 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:24:32.777 ************************************ 00:24:32.777 END TEST accel_dualcast 00:24:32.777 ************************************ 00:24:32.777 11:35:16 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:24:32.777 11:35:16 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:32.777 11:35:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:32.777 11:35:16 accel -- common/autotest_common.sh@10 -- # set +x 00:24:32.777 ************************************ 00:24:32.777 START TEST accel_compare 00:24:32.777 ************************************ 00:24:32.777 11:35:16 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:24:32.777 [2024-06-10 11:35:16.497001] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
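The compare stage launched just above then repeats the IFS=: / read -r var val / case "$var" loop that fills accel_module and accel_opc before the closing [[ -n ... ]] checks. A rough sketch of that idiom as it appears in the trace, not the actual accel.sh source; the key names and the input file are hypothetical, since the trace only shows the expanded results (for example [[ -n software ]] and [[ -n compare ]]):

   # illustration only: split each output line on ':' and capture selected keys
   while IFS=: read -r var val; do
       case "$var" in
           Module) accel_module=$val ;;      # hypothetical key name
           Operation) accel_opc=$val ;;      # hypothetical key name
       esac
   done < accel_perf.out                     # placeholder for the captured output
   [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]
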
00:24:32.777 [2024-06-10 11:35:16.497056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324643 ] 00:24:32.777 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.777 [2024-06-10 11:35:16.569822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.777 [2024-06-10 11:35:16.652727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:32.777 11:35:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:34.158 11:35:17 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:24:34.158 11:35:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:34.158 00:24:34.158 real 0m1.362s 00:24:34.158 user 0m1.247s 00:24:34.158 sys 0m0.127s 00:24:34.158 11:35:17 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:34.158 11:35:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:24:34.158 ************************************ 00:24:34.158 END TEST accel_compare 00:24:34.158 ************************************ 00:24:34.158 11:35:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:24:34.158 11:35:17 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:34.158 11:35:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:34.158 11:35:17 accel -- common/autotest_common.sh@10 -- # set +x 00:24:34.158 ************************************ 00:24:34.158 START TEST accel_xor 00:24:34.158 ************************************ 00:24:34.158 11:35:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:24:34.158 11:35:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:24:34.158 [2024-06-10 11:35:17.947897] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
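The xor stage launched just above is started the same way (run_test accel_xor accel_test -t 1 -w xor -y), and the trace records val=2 for it, i.e. the default two-source xor. Its direct form, with the path and flags copied from the trace and the harness-generated -c /dev/fd/62 config again left out as an assumption:

   # two-source xor for one second, verification on; flags taken from the run_test line above
   /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y
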
00:24:34.158 [2024-06-10 11:35:17.947982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324851 ] 00:24:34.158 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.158 [2024-06-10 11:35:18.031018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.417 [2024-06-10 11:35:18.119170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.417 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:34.418 11:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 
11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:35.798 00:24:35.798 real 0m1.399s 00:24:35.798 user 0m1.265s 00:24:35.798 sys 0m0.148s 00:24:35.798 11:35:19 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:35.798 11:35:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:24:35.798 ************************************ 00:24:35.798 END TEST accel_xor 00:24:35.798 ************************************ 00:24:35.798 11:35:19 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:24:35.798 11:35:19 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:24:35.798 11:35:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:35.798 11:35:19 accel -- common/autotest_common.sh@10 -- # set +x 00:24:35.798 ************************************ 00:24:35.798 START TEST accel_xor 00:24:35.798 ************************************ 00:24:35.798 11:35:19 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:24:35.798 [2024-06-10 11:35:19.402792] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
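The second accel_xor stage launched just above adds -x 3 (run_test accel_xor accel_test -t 1 -w xor -y -x 3), and where the previous run recorded val=2 this one records val=3, so -x evidently selects the number of xor source buffers. Equivalent direct invocation under the same assumptions as the earlier sketches:

   # three-source xor; -x 3 copied from the run_test line in the trace
   /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
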
00:24:35.798 [2024-06-10 11:35:19.402851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325050 ] 00:24:35.798 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.798 [2024-06-10 11:35:19.474190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.798 [2024-06-10 11:35:19.560460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:35.798 11:35:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:37.174 
11:35:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:24:37.174 11:35:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:37.174 00:24:37.174 real 0m1.377s 00:24:37.174 user 0m1.248s 00:24:37.174 sys 0m0.142s 00:24:37.174 11:35:20 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:37.174 11:35:20 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:24:37.174 ************************************ 00:24:37.174 END TEST accel_xor 00:24:37.174 ************************************ 00:24:37.174 11:35:20 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:24:37.174 11:35:20 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:24:37.174 11:35:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:37.174 11:35:20 accel -- common/autotest_common.sh@10 -- # set +x 00:24:37.174 ************************************ 00:24:37.174 START TEST accel_dif_verify 00:24:37.174 ************************************ 00:24:37.174 11:35:20 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:37.174 11:35:20 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:37.175 11:35:20 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:37.175 11:35:20 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:37.175 11:35:20 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:37.175 11:35:20 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:24:37.175 11:35:20 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:24:37.175 [2024-06-10 11:35:20.860086] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
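The dif_verify stage launched just above runs without -y (run_test accel_dif_verify accel_test -t 1 -w dif_verify); besides the usual '4096 bytes' buffer size the trace records '512 bytes' and '8 bytes' values, which look like the block and metadata sizes a DIF workload needs, though the log itself does not name them. The real/user/sys lines that close each stage have the format of bash's time keyword, so the wrapper appears to time every test. Direct form of the command, under the same assumptions as the earlier sketches:

   # dif_verify workload for one second; flags copied from the run_test line in the trace
   /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify
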
00:24:37.175 [2024-06-10 11:35:20.860170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325244 ] 00:24:37.175 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.175 [2024-06-10 11:35:20.938622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.175 [2024-06-10 11:35:21.024386] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 
11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:37.175 11:35:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:38.551 
11:35:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:24:38.551 11:35:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:38.551 00:24:38.551 real 0m1.391s 00:24:38.551 user 0m1.259s 00:24:38.551 sys 0m0.146s 00:24:38.551 11:35:22 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:38.551 11:35:22 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:24:38.551 ************************************ 00:24:38.551 END TEST accel_dif_verify 00:24:38.551 ************************************ 00:24:38.551 11:35:22 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:24:38.551 11:35:22 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:24:38.551 11:35:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:38.551 11:35:22 accel -- common/autotest_common.sh@10 -- # set +x 00:24:38.551 ************************************ 00:24:38.551 START TEST accel_dif_generate 00:24:38.551 ************************************ 00:24:38.551 11:35:22 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.551 
11:35:22 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:24:38.551 11:35:22 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:24:38.551 [2024-06-10 11:35:22.326275] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:38.551 [2024-06-10 11:35:22.326356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325443 ] 00:24:38.551 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.551 [2024-06-10 11:35:22.406863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.551 [2024-06-10 11:35:22.493051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
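The dense runs of IFS=:, read -r var val and case "$var" in above are accel.sh walking an expected-settings stream one key/value pair at a time; for this dif_generate run the values are the opcode itself, sizes of '4096 bytes', '512 bytes' and '8 bytes', the software module, a queue depth of 32 and a 1-second duration. A minimal sketch of that kind of colon-separated parser, with assumed key names and a hypothetical input file, since the exact keys are not shown in the trace:

# Sketch only: consume "key: value" lines the way the IFS=: / read -r var val traces suggest.
while IFS=: read -r var val; do
    val=${val# }                              # drop the space after the colon
    case "$var" in
        'Module')        accel_module=$val ;; # assumed key name
        'Workload Type') accel_opc=$val ;;    # assumed key name
        *) ;;                                 # everything else is ignored
    esac
done < perf_output.txt                        # hypothetical capture of accel_perf output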
00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:38.810 11:35:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:39.745 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:24:40.005 11:35:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:40.005 00:24:40.005 real 0m1.397s 00:24:40.005 user 0m1.262s 00:24:40.005 sys 
0m0.149s 00:24:40.005 11:35:23 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:40.005 11:35:23 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:24:40.005 ************************************ 00:24:40.005 END TEST accel_dif_generate 00:24:40.005 ************************************ 00:24:40.005 11:35:23 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:24:40.005 11:35:23 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:24:40.005 11:35:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:40.005 11:35:23 accel -- common/autotest_common.sh@10 -- # set +x 00:24:40.005 ************************************ 00:24:40.005 START TEST accel_dif_generate_copy 00:24:40.005 ************************************ 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:24:40.005 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:24:40.005 [2024-06-10 11:35:23.792470] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
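As in the earlier workloads, accel_perf is started here as build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy: build_accel_config assembles accel_json_cfg=() (empty in this run, so the '[[ 0 -gt 0 ]]' and '[[ -n '' ]]' branches all fall through) and the resulting JSON is handed over a process-substitution fd, which is why the config path shows up as /dev/fd/62. A sketch of that launch pattern, with an illustrative placeholder config body rather than whatever this job actually fed in:

# Sketch: hand accel_perf its JSON config via process substitution; bash exposes the pipe
# as /dev/fd/NN (62 in this run). The config below is a placeholder, not this job's config.
cfg='{}'
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
    -c <(echo "$cfg" | jq -r .) -t 1 -w dif_generate_copy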
00:24:40.005 [2024-06-10 11:35:23.792549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325638 ] 00:24:40.005 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.005 [2024-06-10 11:35:23.869908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.005 [2024-06-10 11:35:23.952220] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:24 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.264 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:40.265 11:35:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
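Once the settings stream is consumed, each TEST block ends with the three accel.sh@27 checks seen after the timing lines: the captured module and opcode must be non-empty and the module must be the software fallback (xtrace prints the comparison as software == \s\o\f\t\w\a\r\e because the right-hand side is a pattern). A compressed sketch of that pass condition, assuming the variables involved are the ones set at accel.sh@22 and @23:

# Sketch of the pass condition implied by the accel.sh@27 traces.
expected_module=software
[[ -n "$accel_module" && -n "$accel_opc" && "$accel_module" == "$expected_module" ]] || exit 1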
00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:41.199 00:24:41.199 real 0m1.372s 00:24:41.199 user 0m1.243s 00:24:41.199 sys 0m0.141s 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:41.199 11:35:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:24:41.199 ************************************ 00:24:41.199 END TEST accel_dif_generate_copy 00:24:41.199 ************************************ 00:24:41.457 11:35:25 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:24:41.457 11:35:25 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:41.457 11:35:25 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:24:41.457 11:35:25 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:41.457 11:35:25 accel -- common/autotest_common.sh@10 -- # set +x 00:24:41.457 ************************************ 00:24:41.457 START TEST accel_comp 00:24:41.457 ************************************ 00:24:41.457 11:35:25 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:24:41.457 11:35:25 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:24:41.457 [2024-06-10 11:35:25.238842] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:41.457 [2024-06-10 11:35:25.238909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325842 ] 00:24:41.457 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.457 [2024-06-10 11:35:25.319374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.457 [2024-06-10 11:35:25.405263] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 
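accel_comp switches the opcode to compress and, unlike the DIF workloads, points accel_perf at a real input file with -l .../spdk/test/accel/bib. The logged command line can be reproduced by hand roughly as follows (sketch; the harness-supplied -c /dev/fd/62 config fd is omitted, and the workspace path should be adjusted to the local SPDK checkout):

# Sketch: the software compress run from this log.
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$SPDK/test/accel/bib"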
11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:41.715 11:35:25 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:41.715 11:35:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:24:43.089 11:35:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:43.089 00:24:43.089 real 0m1.394s 00:24:43.089 user 0m1.264s 00:24:43.089 sys 0m0.145s 00:24:43.089 11:35:26 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:43.089 11:35:26 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:24:43.089 ************************************ 00:24:43.089 END TEST accel_comp 00:24:43.089 ************************************ 00:24:43.089 11:35:26 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:43.089 11:35:26 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:24:43.089 11:35:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:43.089 11:35:26 accel -- common/autotest_common.sh@10 -- # set +x 00:24:43.089 ************************************ 00:24:43.089 START TEST accel_decomp 00:24:43.089 ************************************ 00:24:43.089 11:35:26 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:43.089 11:35:26 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:24:43.090 [2024-06-10 11:35:26.704459] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:43.090 [2024-06-10 11:35:26.704538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326033 ] 00:24:43.090 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.090 [2024-06-10 11:35:26.783820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.090 [2024-06-10 11:35:26.869466] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:43.090 
11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:43.090 11:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:44.460 11:35:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:24:44.461 11:35:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:44.461 00:24:44.461 real 0m1.391s 00:24:44.461 user 0m1.246s 00:24:44.461 sys 0m0.149s 00:24:44.461 11:35:28 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:44.461 11:35:28 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:24:44.461 ************************************ 00:24:44.461 END TEST accel_decomp 00:24:44.461 ************************************ 00:24:44.461 
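The real/user/sys triple printed before each END TEST banner is bash's built-in time wrapped around the test body, so this software-path decompress run costs roughly 1.4 s of wall-clock time. To get just that timing for one workload locally, a sketch of the equivalent one-liner (same caveats as above about the omitted -c config fd and the workspace path):

# Sketch: time a single decompress pass over the bib test file, as accel_decomp does.
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
time "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y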
11:35:28 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:24:44.461 11:35:28 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:24:44.461 11:35:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:44.461 11:35:28 accel -- common/autotest_common.sh@10 -- # set +x 00:24:44.461 ************************************ 00:24:44.461 START TEST accel_decomp_full 00:24:44.461 ************************************ 00:24:44.461 11:35:28 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:24:44.461 [2024-06-10 11:35:28.149404] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
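accel_decomp_full repeats the decompress workload with -o 0 added to the accel_perf command line; the effect is visible in the settings stream that follows, where the expected transfer size changes from the 4096-byte chunks of the previous test to '111250 bytes' (presumably the full bib file). Side by side, the two logged invocations are, in sketch form:

# Sketch (harness -c config fd omitted; SPDK is the workspace path shown in the log):
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y        # accel_decomp
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0   # accel_decomp_full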
00:24:44.461 [2024-06-10 11:35:28.149484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326226 ] 00:24:44.461 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.461 [2024-06-10 11:35:28.228633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.461 [2024-06-10 11:35:28.314225] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
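Every accel_perf start in this log is preceded by the same DPDK EAL parameter set (--no-shconf -c 0x1 --huge-unlink --no-telemetry plus a per-run --file-prefix=spdk_pidNNNN) and the "EAL: No free 2048 kB hugepages reported on node 1" notice, i.e. these -c 0x1 runs are pinned to core 0 and node 1 has no free 2 MiB hugepages at launch. If that notice needs checking on a test node, a quick sketch:

# Sketch: show the per-NUMA-node free 2048 kB hugepage counts the EAL notice refers to.
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages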
00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:44.461 11:35:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:45.833 11:35:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:45.833 11:35:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:45.833 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:45.833 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 
-- # read -r var val 00:24:45.833 11:35:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:24:45.834 11:35:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:45.834 00:24:45.834 real 0m1.393s 00:24:45.834 user 0m1.245s 00:24:45.834 sys 0m0.152s 00:24:45.834 11:35:29 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:45.834 11:35:29 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:24:45.834 ************************************ 00:24:45.834 END TEST accel_decomp_full 00:24:45.834 ************************************ 00:24:45.834 11:35:29 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:24:45.834 11:35:29 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:24:45.834 11:35:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:45.834 11:35:29 accel -- common/autotest_common.sh@10 -- # set +x 00:24:45.834 ************************************ 00:24:45.834 START TEST accel_decomp_mcore 00:24:45.834 ************************************ 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:24:45.834 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:24:45.834 [2024-06-10 11:35:29.612441] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:45.834 [2024-06-10 11:35:29.612528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326426 ] 00:24:45.834 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.834 [2024-06-10 11:35:29.691214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.834 [2024-06-10 11:35:29.781220] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.834 [2024-06-10 11:35:29.781325] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.834 [2024-06-10 11:35:29.781345] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.834 [2024-06-10 11:35:29.781346] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:46.094 11:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
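The accel_decomp_mcore trace above shows accel.sh handing a JSON accel config to the accel_perf example over /dev/fd/62 and running a one-second software decompress of the test/accel/bib input, verified with -y, on the 0xf core mask that produced the four reactor threads logged earlier; the repeated 'IFS=:', 'read -r var val' and 'case "$var" in' entries are simply the xtrace of accel.sh reading that configuration back one name:value pair at a time, which is how accel_module and accel_opc end up set for the final [[ -n software ]] / [[ -n decompress ]] checks. A minimal standalone sketch of the same invocation, using the workspace path echoed in the log (outside the harness, the -c /dev/fd/62 config supplied by build_accel_config can simply be omitted):

    # Sketch of the accel_perf run traced above; the path matches the Jenkins
    # workspace shown in the log, so adjust it for a local SPDK checkout.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

    # -t 1: run for one second   -w decompress: software decompress workload
    # -l:   compressed input     -y: verify the output    -m 0xf: cores 0-3
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -m 0xf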
00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:47.472 00:24:47.472 real 0m1.409s 00:24:47.472 user 0m4.643s 00:24:47.472 sys 0m0.160s 00:24:47.472 11:35:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:47.472 11:35:31 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:24:47.472 ************************************ 00:24:47.472 END TEST accel_decomp_mcore 00:24:47.472 ************************************ 00:24:47.472 11:35:31 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:47.472 11:35:31 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:24:47.472 11:35:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:47.472 11:35:31 accel -- common/autotest_common.sh@10 -- # set +x 00:24:47.472 ************************************ 00:24:47.472 START TEST accel_decomp_full_mcore 00:24:47.472 ************************************ 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:24:47.472 [2024-06-10 11:35:31.096099] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:47.472 [2024-06-10 11:35:31.096184] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326630 ] 00:24:47.472 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.472 [2024-06-10 11:35:31.174298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.472 [2024-06-10 11:35:31.264056] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.472 [2024-06-10 11:35:31.264070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.472 [2024-06-10 11:35:31.264131] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.472 [2024-06-10 11:35:31.264133] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:24:47.472 11:35:31 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.472 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:47.473 11:35:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:48.849 00:24:48.849 real 0m1.403s 00:24:48.849 user 0m4.645s 00:24:48.849 sys 0m0.148s 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:48.849 11:35:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:24:48.849 ************************************ 00:24:48.849 END TEST accel_decomp_full_mcore 00:24:48.849 ************************************ 00:24:48.849 11:35:32 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:24:48.849 11:35:32 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:24:48.849 11:35:32 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:48.849 11:35:32 accel -- common/autotest_common.sh@10 -- # set +x 00:24:48.849 ************************************ 00:24:48.849 START TEST accel_decomp_mthread 00:24:48.849 ************************************ 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:24:48.849 [2024-06-10 11:35:32.558710] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:48.849 [2024-06-10 11:35:32.558791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326878 ] 00:24:48.849 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.849 [2024-06-10 11:35:32.637130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.849 [2024-06-10 11:35:32.723831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.849 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 
11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:48.850 11:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:48.850 11:35:32 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:50.227 00:24:50.227 real 0m1.391s 00:24:50.227 user 0m1.251s 00:24:50.227 sys 0m0.154s 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:50.227 11:35:33 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:24:50.227 ************************************ 00:24:50.227 END TEST accel_decomp_mthread 00:24:50.227 ************************************ 00:24:50.227 11:35:33 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:24:50.227 11:35:33 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:24:50.227 11:35:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:50.227 
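Each END TEST banner above is preceded by a bash time summary for the wrapped test, and comparing them shows the effect of the run modes: the single-core decompress variants report roughly 1.2-1.3 seconds of user time, while the -m 0xf mcore runs report about 4.6 seconds of user time spread across four reactors within the same ~1.4 seconds of wall time. A small, purely hypothetical helper for pulling those figures out of a log shaped like this one (the file name build.log is a placeholder):

    # Hypothetical convenience one-liner: collect the real/user/sys triplets
    # that precede each END TEST banner in a log with this layout.
    grep -Eo '(real|user|sys) [0-9]+m[0-9.]+s' build.log | paste - - -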
11:35:33 accel -- common/autotest_common.sh@10 -- # set +x 00:24:50.227 ************************************ 00:24:50.227 START TEST accel_decomp_full_mthread 00:24:50.227 ************************************ 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:24:50.227 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:24:50.227 [2024-06-10 11:35:34.027567] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
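Each accel_perf launch in this section logs an 'EAL: No free 2048 kB hugepages reported on node 1' notice during DPDK initialization. The tests still pass, so the notice is informational here: it only means no 2 MB hugepages are reserved on NUMA node 1, with the pages DPDK actually uses presumably sitting on another node. Per-node availability can be checked straight from sysfs, independent of SPDK; a small sketch:

    # Report total/free 2 MB hugepages per NUMA node (standard Linux sysfs paths).
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        echo "$n: total=$(cat "$n/nr_hugepages") free=$(cat "$n/free_hugepages")"
    done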
00:24:50.227 [2024-06-10 11:35:34.027644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327118 ] 00:24:50.227 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.227 [2024-06-10 11:35:34.109054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.487 [2024-06-10 11:35:34.197670] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:50.487 11:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:51.865 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:24:51.866 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:24:51.866 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:24:51.866 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:24:51.866 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:51.866 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:24:51.866 11:35:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:51.866 00:24:51.866 real 0m1.423s 00:24:51.866 user 0m1.290s 00:24:51.866 sys 0m0.145s 00:24:51.866 11:35:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:51.866 11:35:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:24:51.866 ************************************ 00:24:51.866 END TEST accel_decomp_full_mthread 00:24:51.866 
************************************ 00:24:51.866 11:35:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:24:51.866 11:35:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:24:51.866 11:35:35 accel -- accel/accel.sh@137 -- # build_accel_config 00:24:51.866 11:35:35 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:24:51.866 11:35:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:51.866 11:35:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:51.866 11:35:35 accel -- common/autotest_common.sh@10 -- # set +x 00:24:51.866 11:35:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:51.866 11:35:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:51.866 11:35:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:51.866 11:35:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:51.866 11:35:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:24:51.866 11:35:35 accel -- accel/accel.sh@41 -- # jq -r . 00:24:51.866 ************************************ 00:24:51.866 START TEST accel_dif_functional_tests 00:24:51.866 ************************************ 00:24:51.866 11:35:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:24:51.866 [2024-06-10 11:35:35.516311] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:51.866 [2024-06-10 11:35:35.516355] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327378 ] 00:24:51.866 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.866 [2024-06-10 11:35:35.590819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:51.866 [2024-06-10 11:35:35.679754] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.866 [2024-06-10 11:35:35.679841] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.866 [2024-06-10 11:35:35.679843] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.866 00:24:51.866 00:24:51.866 CUnit - A unit testing framework for C - Version 2.1-3 00:24:51.866 http://cunit.sourceforge.net/ 00:24:51.866 00:24:51.866 00:24:51.866 Suite: accel_dif 00:24:51.866 Test: verify: DIF generated, GUARD check ...passed 00:24:51.866 Test: verify: DIF generated, APPTAG check ...passed 00:24:51.866 Test: verify: DIF generated, REFTAG check ...passed 00:24:51.866 Test: verify: DIF not generated, GUARD check ...[2024-06-10 11:35:35.759439] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:24:51.866 passed 00:24:51.866 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 11:35:35.759496] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:24:51.866 passed 00:24:51.866 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 11:35:35.759539] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:24:51.866 passed 00:24:51.866 Test: verify: APPTAG correct, APPTAG check ...passed 00:24:51.866 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 11:35:35.759590] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:24:51.866 passed 00:24:51.866 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:24:51.866 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:24:51.866 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:24:51.866 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 11:35:35.759691] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:24:51.866 passed 00:24:51.866 Test: verify copy: DIF generated, GUARD check ...passed 00:24:51.866 Test: verify copy: DIF generated, APPTAG check ...passed 00:24:51.866 Test: verify copy: DIF generated, REFTAG check ...passed 00:24:51.866 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 11:35:35.759815] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:24:51.866 passed 00:24:51.866 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 11:35:35.759842] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:24:51.866 passed 00:24:51.866 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 11:35:35.759870] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:24:51.866 passed 00:24:51.866 Test: generate copy: DIF generated, GUARD check ...passed 00:24:51.866 Test: generate copy: DIF generated, APTTAG check ...passed 00:24:51.866 Test: generate copy: DIF generated, REFTAG check ...passed 00:24:51.866 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:24:51.866 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:24:51.866 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:24:51.866 Test: generate copy: iovecs-len validate ...[2024-06-10 11:35:35.760054] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
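The accel_dif run above is a CUnit suite covering SPDK's DIF verify and generate-copy paths; the *ERROR* lines emitted by dif.c belong to negative test cases and each is immediately followed by 'passed', confirming that guard, app tag and ref tag mismatches are detected and that an undersized bounce_iovs array is rejected. The functional test binary traced here can also be run on its own; a sketch using the path echoed in the log (the harness normally pipes a JSON accel config in via -c /dev/fd/62, which is left out here so the defaults apply):

    # Sketch: run the DIF functional tests directly from the workspace in the log.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK/test/accel/dif/dif"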
00:24:51.866 passed 00:24:51.866 Test: generate copy: buffer alignment validate ...passed 00:24:51.866 00:24:51.866 Run Summary: Type Total Ran Passed Failed Inactive 00:24:51.866 suites 1 1 n/a 0 0 00:24:51.866 tests 26 26 26 0 0 00:24:51.866 asserts 115 115 115 0 n/a 00:24:51.866 00:24:51.866 Elapsed time = 0.002 seconds 00:24:52.125 00:24:52.125 real 0m0.446s 00:24:52.125 user 0m0.650s 00:24:52.125 sys 0m0.165s 00:24:52.125 11:35:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:52.125 11:35:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:24:52.125 ************************************ 00:24:52.125 END TEST accel_dif_functional_tests 00:24:52.125 ************************************ 00:24:52.125 00:24:52.125 real 0m32.351s 00:24:52.125 user 0m35.289s 00:24:52.125 sys 0m5.203s 00:24:52.125 11:35:35 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:52.125 11:35:35 accel -- common/autotest_common.sh@10 -- # set +x 00:24:52.125 ************************************ 00:24:52.125 END TEST accel 00:24:52.125 ************************************ 00:24:52.125 11:35:36 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:24:52.125 11:35:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:52.125 11:35:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:52.125 11:35:36 -- common/autotest_common.sh@10 -- # set +x 00:24:52.125 ************************************ 00:24:52.125 START TEST accel_rpc 00:24:52.125 ************************************ 00:24:52.125 11:35:36 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:24:52.384 * Looking for test storage... 00:24:52.384 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:24:52.384 11:35:36 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:24:52.384 11:35:36 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3327447 00:24:52.384 11:35:36 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:24:52.384 11:35:36 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3327447 00:24:52.384 11:35:36 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 3327447 ']' 00:24:52.384 11:35:36 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.385 11:35:36 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:52.385 11:35:36 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.385 11:35:36 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:52.385 11:35:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:52.385 [2024-06-10 11:35:36.184228] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
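The accel_rpc suite that starts here launches a bare spdk_tgt with --wait-for-rpc, so the whole accel_assign_opcode test runs over the RPC socket: copy is first assigned to a deliberately bogus module named 'incorrect', then to 'software', framework_start_init brings the target up, and the accel_get_opc_assignments output is filtered with jq -r .copy and grepped for the software entry. The same sequence can be replayed by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket; a sketch (the sleep stands in for the waitforlisten helper the harness uses):

    # Sketch of the RPC sequence exercised by accel_rpc.sh.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    sleep 2    # the harness waits on /var/tmp/spdk.sock via waitforlisten instead

    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments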
00:24:52.385 [2024-06-10 11:35:36.184310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327447 ] 00:24:52.385 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.385 [2024-06-10 11:35:36.271615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.644 [2024-06-10 11:35:36.377193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.211 11:35:37 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:53.211 11:35:37 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:24:53.211 11:35:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:24:53.211 11:35:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:24:53.211 11:35:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:24:53.211 11:35:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:24:53.211 11:35:37 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:24:53.211 11:35:37 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:53.211 11:35:37 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:53.211 11:35:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:53.211 ************************************ 00:24:53.211 START TEST accel_assign_opcode 00:24:53.211 ************************************ 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:24:53.211 [2024-06-10 11:35:37.047269] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:24:53.211 [2024-06-10 11:35:37.055288] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.211 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:24:53.499 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.499 11:35:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:24:53.499 11:35:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:24:53.499 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.499 11:35:37 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:24:53.499 11:35:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:24:53.499 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.499 software 00:24:53.499 00:24:53.499 real 0m0.256s 00:24:53.499 user 0m0.040s 00:24:53.499 sys 0m0.017s 00:24:53.499 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:53.499 11:35:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:24:53.499 ************************************ 00:24:53.499 END TEST accel_assign_opcode 00:24:53.499 ************************************ 00:24:53.499 11:35:37 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3327447 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 3327447 ']' 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 3327447 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3327447 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3327447' 00:24:53.499 killing process with pid 3327447 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@968 -- # kill 3327447 00:24:53.499 11:35:37 accel_rpc -- common/autotest_common.sh@973 -- # wait 3327447 00:24:54.069 00:24:54.069 real 0m1.647s 00:24:54.069 user 0m1.648s 00:24:54.070 sys 0m0.509s 00:24:54.070 11:35:37 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:54.070 11:35:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:54.070 ************************************ 00:24:54.070 END TEST accel_rpc 00:24:54.070 ************************************ 00:24:54.070 11:35:37 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:24:54.070 11:35:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:54.070 11:35:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:54.070 11:35:37 -- common/autotest_common.sh@10 -- # set +x 00:24:54.070 ************************************ 00:24:54.070 START TEST app_cmdline 00:24:54.070 ************************************ 00:24:54.070 11:35:37 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:24:54.070 * Looking for test storage... 
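Annotation: the accel_assign_opcode test traced above drives three RPCs against a target still in --wait-for-rpc mode: assign the copy opcode (first to a bogus "incorrect" module, then to "software"), start the framework with framework_start_init, and read the assignment back via accel_get_opc_assignments piped through jq -r .copy. The sketch below mirrors only the final assignment and the read-back, shelling out to the workspace's rpc.py instead of the rpc_cmd wrapper the test uses; it is illustrative, not part of the suite.

# Standalone sketch of the RPC sequence exercised above (illustration only).
# The target must still be in --wait-for-rpc mode when accel_assign_opc runs.
import json
import subprocess

RPC = "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py"

def rpc(*args: str) -> str:
    # rpc.py talks to the default socket, /var/tmp/spdk.sock
    return subprocess.run([RPC, *args], check=True,
                          capture_output=True, text=True).stdout

rpc("accel_assign_opc", "-o", "copy", "-m", "software")
rpc("framework_start_init")

assignments = json.loads(rpc("accel_get_opc_assignments"))
assert assignments["copy"] == "software"   # same check as the jq/grep above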
00:24:54.070 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:24:54.070 11:35:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:24:54.070 11:35:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3327714 00:24:54.070 11:35:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3327714 00:24:54.070 11:35:37 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:24:54.070 11:35:37 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 3327714 ']' 00:24:54.070 11:35:37 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.070 11:35:37 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:54.070 11:35:37 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.070 11:35:37 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:54.070 11:35:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:54.070 [2024-06-10 11:35:37.933986] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:54.070 [2024-06-10 11:35:37.934062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327714 ] 00:24:54.070 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.070 [2024-06-10 11:35:38.014308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.329 [2024-06-10 11:35:38.110167] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.904 11:35:38 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:54.904 11:35:38 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:24:54.904 11:35:38 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:24:55.163 { 00:24:55.163 "version": "SPDK v24.09-pre git sha1 a3f6419f1", 00:24:55.163 "fields": { 00:24:55.163 "major": 24, 00:24:55.163 "minor": 9, 00:24:55.163 "patch": 0, 00:24:55.163 "suffix": "-pre", 00:24:55.163 "commit": "a3f6419f1" 00:24:55.163 } 00:24:55.163 } 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:24:55.163 11:35:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:24:55.163 11:35:38 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:55.422 request: 00:24:55.422 { 00:24:55.422 "method": "env_dpdk_get_mem_stats", 00:24:55.422 "req_id": 1 00:24:55.422 } 00:24:55.422 Got JSON-RPC error response 00:24:55.422 response: 00:24:55.422 { 00:24:55.422 "code": -32601, 00:24:55.422 "message": "Method not found" 00:24:55.422 } 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:55.422 11:35:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3327714 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 3327714 ']' 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 3327714 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:55.422 11:35:39 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3327714 00:24:55.423 11:35:39 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:55.423 11:35:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:55.423 11:35:39 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3327714' 00:24:55.423 killing process with pid 3327714 00:24:55.423 11:35:39 app_cmdline -- common/autotest_common.sh@968 -- # kill 3327714 00:24:55.423 11:35:39 app_cmdline -- common/autotest_common.sh@973 -- # wait 3327714 00:24:55.682 00:24:55.682 real 0m1.754s 00:24:55.682 user 0m2.024s 00:24:55.682 sys 0m0.495s 00:24:55.682 11:35:39 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:55.682 
11:35:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:55.682 ************************************ 00:24:55.682 END TEST app_cmdline 00:24:55.682 ************************************ 00:24:55.682 11:35:39 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:24:55.682 11:35:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:55.682 11:35:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:55.682 11:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:55.942 ************************************ 00:24:55.942 START TEST version 00:24:55.942 ************************************ 00:24:55.942 11:35:39 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:24:55.942 * Looking for test storage... 00:24:55.942 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:24:55.942 11:35:39 version -- app/version.sh@17 -- # get_header_version major 00:24:55.942 11:35:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:24:55.942 11:35:39 version -- app/version.sh@14 -- # tr -d '"' 00:24:55.942 11:35:39 version -- app/version.sh@14 -- # cut -f2 00:24:55.942 11:35:39 version -- app/version.sh@17 -- # major=24 00:24:55.942 11:35:39 version -- app/version.sh@18 -- # get_header_version minor 00:24:55.942 11:35:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:24:55.942 11:35:39 version -- app/version.sh@14 -- # cut -f2 00:24:55.942 11:35:39 version -- app/version.sh@14 -- # tr -d '"' 00:24:55.942 11:35:39 version -- app/version.sh@18 -- # minor=9 00:24:55.942 11:35:39 version -- app/version.sh@19 -- # get_header_version patch 00:24:55.942 11:35:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:24:55.942 11:35:39 version -- app/version.sh@14 -- # tr -d '"' 00:24:55.942 11:35:39 version -- app/version.sh@14 -- # cut -f2 00:24:55.942 11:35:39 version -- app/version.sh@19 -- # patch=0 00:24:55.942 11:35:39 version -- app/version.sh@20 -- # get_header_version suffix 00:24:55.942 11:35:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:24:55.942 11:35:39 version -- app/version.sh@14 -- # tr -d '"' 00:24:55.942 11:35:39 version -- app/version.sh@14 -- # cut -f2 00:24:55.942 11:35:39 version -- app/version.sh@20 -- # suffix=-pre 00:24:55.942 11:35:39 version -- app/version.sh@22 -- # version=24.9 00:24:55.942 11:35:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:24:55.942 11:35:39 version -- app/version.sh@28 -- # version=24.9rc0 00:24:55.942 11:35:39 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:24:55.942 11:35:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:24:55.942 11:35:39 version -- 
app/version.sh@30 -- # py_version=24.9rc0 00:24:55.942 11:35:39 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:24:55.942 00:24:55.942 real 0m0.190s 00:24:55.942 user 0m0.086s 00:24:55.942 sys 0m0.148s 00:24:55.942 11:35:39 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:55.942 11:35:39 version -- common/autotest_common.sh@10 -- # set +x 00:24:55.942 ************************************ 00:24:55.942 END TEST version 00:24:55.942 ************************************ 00:24:55.942 11:35:39 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:24:55.942 11:35:39 -- spdk/autotest.sh@198 -- # uname -s 00:24:55.942 11:35:39 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:24:55.942 11:35:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:24:55.942 11:35:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:24:55.942 11:35:39 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:24:55.942 11:35:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:55.942 11:35:39 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:55.942 11:35:39 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:55.942 11:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:56.201 11:35:39 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:24:56.201 11:35:39 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:24:56.201 11:35:39 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:24:56.201 11:35:39 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:24:56.201 11:35:39 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:24:56.201 11:35:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:56.201 11:35:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:56.201 11:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:56.201 ************************************ 00:24:56.201 START TEST llvm_fuzz 00:24:56.201 ************************************ 00:24:56.201 11:35:39 llvm_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:24:56.201 * Looking for test storage... 
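Annotation: the version test just above pulls SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with a grep/cut/tr pipeline, builds "24.9", maps the "-pre" suffix to "24.9rc0", and requires python3 -c 'import spdk; print(spdk.__version__)' to report the same string. The Python sketch below performs the same check; the regexes are illustrative stand-ins for the grep/cut/tr steps shown in the trace, and importing spdk assumes the PYTHONPATH exported above.

# Sketch of the check version.sh performs above: read the version macros,
# build the expected version string (a "-pre" suffix is reported as "rc0"),
# and compare with what the Python bindings expose.
import re
import spdk   # the workspace's python package, as used by version.sh

HDR = ("/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/"
       "include/spdk/version.h")

def macro(name: str, text: str) -> str:
    m = re.search(rf'^#define SPDK_VERSION_{name}[ \t]+(.+)$', text, re.M)
    return m.group(1).strip().strip('"') if m else ""

text = open(HDR).read()
major, minor, patch, suffix = (macro(n, text)
                               for n in ("MAJOR", "MINOR", "PATCH", "SUFFIX"))

version = f"{major}.{minor}"
if patch != "0":
    version += f".{patch}"
if suffix == "-pre":
    version += "rc0"          # 24.9 + "-pre" -> 24.9rc0, as in the trace

assert spdk.__version__ == version   # both sides are 24.9rc0 in this run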
00:24:56.201 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@547 -- # fuzzers=() 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@547 -- # local fuzzers 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@549 -- # [[ -n '' ]] 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("${fuzzers[@]##*/}") 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@556 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:24:56.201 11:35:40 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:56.201 11:35:40 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:56.201 ************************************ 00:24:56.201 START TEST nvmf_fuzz 00:24:56.201 ************************************ 00:24:56.201 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:24:56.463 * Looking for test storage... 
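Annotation: earlier in this trace, the app_cmdline test starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and asserts two things: rpc_get_methods reports exactly those two methods, and a call outside the allow-list (env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601 "Method not found". The sketch below expresses those two assertions through rpc.py rather than the rpc_cmd/NOT helpers the test uses; it is illustrative only.

# Sketch of the two app_cmdline assertions described above, driven through
# rpc.py against a target started with --rpcs-allowed.
import json
import subprocess

RPC = "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py"

# 1) only the allow-listed methods are exposed
out = subprocess.run([RPC, "rpc_get_methods"], check=True,
                     capture_output=True, text=True).stdout
assert sorted(json.loads(out)) == ["rpc_get_methods", "spdk_get_version"]

# 2) anything else is rejected with "Method not found" (code -32601)
blocked = subprocess.run([RPC, "env_dpdk_get_mem_stats"],
                         capture_output=True, text=True)
assert blocked.returncode != 0
assert "Method not found" in (blocked.stdout + blocked.stderr)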
00:24:56.463 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:24:56.463 11:35:40 llvm_fuzz.nvmf_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:24:56.464 11:35:40 
llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:56.464 #define SPDK_CONFIG_H 00:24:56.464 #define SPDK_CONFIG_APPS 1 00:24:56.464 #define SPDK_CONFIG_ARCH native 00:24:56.464 #undef SPDK_CONFIG_ASAN 00:24:56.464 #undef SPDK_CONFIG_AVAHI 00:24:56.464 #undef SPDK_CONFIG_CET 00:24:56.464 #define SPDK_CONFIG_COVERAGE 1 00:24:56.464 #define SPDK_CONFIG_CROSS_PREFIX 00:24:56.464 #undef SPDK_CONFIG_CRYPTO 00:24:56.464 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:56.464 #undef SPDK_CONFIG_CUSTOMOCF 00:24:56.464 #undef SPDK_CONFIG_DAOS 00:24:56.464 #define SPDK_CONFIG_DAOS_DIR 00:24:56.464 #define SPDK_CONFIG_DEBUG 1 00:24:56.464 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:56.464 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:24:56.464 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:56.464 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:56.464 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:56.464 #undef SPDK_CONFIG_DPDK_UADK 00:24:56.464 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:24:56.464 #define SPDK_CONFIG_EXAMPLES 1 00:24:56.464 #undef SPDK_CONFIG_FC 00:24:56.464 #define SPDK_CONFIG_FC_PATH 00:24:56.464 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:56.464 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:56.464 #undef SPDK_CONFIG_FUSE 00:24:56.464 #define SPDK_CONFIG_FUZZER 1 00:24:56.464 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:24:56.464 #undef SPDK_CONFIG_GOLANG 00:24:56.464 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:56.464 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:56.464 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:56.464 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:56.464 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:56.464 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:56.464 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:56.464 #define SPDK_CONFIG_IDXD 1 00:24:56.464 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:56.464 #undef SPDK_CONFIG_IPSEC_MB 00:24:56.464 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:56.464 #define SPDK_CONFIG_ISAL 1 00:24:56.464 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:56.464 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:56.464 #define SPDK_CONFIG_LIBDIR 00:24:56.464 #undef SPDK_CONFIG_LTO 00:24:56.464 #define SPDK_CONFIG_MAX_LCORES 00:24:56.464 #define SPDK_CONFIG_NVME_CUSE 1 00:24:56.464 #undef SPDK_CONFIG_OCF 00:24:56.464 #define SPDK_CONFIG_OCF_PATH 00:24:56.464 #define SPDK_CONFIG_OPENSSL_PATH 00:24:56.464 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:56.464 #define SPDK_CONFIG_PGO_DIR 00:24:56.464 #undef SPDK_CONFIG_PGO_USE 00:24:56.464 #define SPDK_CONFIG_PREFIX /usr/local 00:24:56.464 #undef SPDK_CONFIG_RAID5F 00:24:56.464 #undef 
SPDK_CONFIG_RBD 00:24:56.464 #define SPDK_CONFIG_RDMA 1 00:24:56.464 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:56.464 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:56.464 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:56.464 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:56.464 #undef SPDK_CONFIG_SHARED 00:24:56.464 #undef SPDK_CONFIG_SMA 00:24:56.464 #define SPDK_CONFIG_TESTS 1 00:24:56.464 #undef SPDK_CONFIG_TSAN 00:24:56.464 #define SPDK_CONFIG_UBLK 1 00:24:56.464 #define SPDK_CONFIG_UBSAN 1 00:24:56.464 #undef SPDK_CONFIG_UNIT_TESTS 00:24:56.464 #undef SPDK_CONFIG_URING 00:24:56.464 #define SPDK_CONFIG_URING_PATH 00:24:56.464 #undef SPDK_CONFIG_URING_ZNS 00:24:56.464 #undef SPDK_CONFIG_USDT 00:24:56.464 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:56.464 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:56.464 #define SPDK_CONFIG_VFIO_USER 1 00:24:56.464 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:56.464 #define SPDK_CONFIG_VHOST 1 00:24:56.464 #define SPDK_CONFIG_VIRTIO 1 00:24:56.464 #undef SPDK_CONFIG_VTUNE 00:24:56.464 #define SPDK_CONFIG_VTUNE_DIR 00:24:56.464 #define SPDK_CONFIG_WERROR 1 00:24:56.464 #define SPDK_CONFIG_WPDK_DIR 00:24:56.464 #undef SPDK_CONFIG_XNVME 00:24:56.464 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:56.464 11:35:40 
llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:24:56.464 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # : 1 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # : 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # : 0 00:24:56.465 
11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # : 1 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # : 1 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # : 1 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # : 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # : 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # : true 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # : 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # : 0 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # : 00:24:56.465 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # : 0 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # : 0 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@200 -- # cat 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # [[ -z 3328210 ]] 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # kill -0 3328210 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:24:56.466 
11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.XPvkJ8 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.XPvkJ8/tests/nvmf /tmp/spdk.XPvkJ8 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # df -T 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=957775872 00:24:56.466 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4326653952 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=82554945536 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508429312 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=11953483776 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47249502208 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254212608 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895695872 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901688320 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5992448 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253446656 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254216704 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=770048 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450835968 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450840064 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:24:56.467 * Looking for test storage... 
00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # target_space=82554945536 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # new_size=14168076288 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:24:56.467 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@389 -- # return 0 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1681 -- # set -o errtrace 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1686 -- # true 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1688 -- # xtrace_fd 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:24:56.467 11:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:24:56.467 [2024-06-10 11:35:40.400066] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:24:56.467 [2024-06-10 11:35:40.400151] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328251 ] 00:24:56.727 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.986 [2024-06-10 11:35:40.738533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.986 [2024-06-10 11:35:40.834635] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.986 [2024-06-10 11:35:40.895367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.986 [2024-06-10 11:35:40.911648] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:24:56.986 INFO: Running with entropic power schedule (0xFF, 100). 00:24:56.986 INFO: Seed: 2481817929 00:24:57.245 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:24:57.245 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:24:57.245 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:24:57.245 INFO: A corpus is not provided, starting from an empty corpus 00:24:57.245 #2 INITED exec/s: 0 rss: 64Mb 00:24:57.245 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:24:57.245 This may also happen if the target rejected all inputs we tried so far 00:24:57.245 [2024-06-10 11:35:40.966879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.245 [2024-06-10 11:35:40.966916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.504 NEW_FUNC[1/687]: 0x482e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:24:57.504 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:24:57.504 #23 NEW cov: 11822 ft: 11823 corp: 2/99b lim: 320 exec/s: 0 rss: 72Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:24:57.504 [2024-06-10 11:35:41.339899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.504 [2024-06-10 11:35:41.339944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.504 [2024-06-10 11:35:41.340062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.504 [2024-06-10 11:35:41.340080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.504 #29 NEW cov: 11952 ft: 12698 corp: 3/259b lim: 320 exec/s: 0 rss: 72Mb L: 160/160 MS: 1 CopyPart- 00:24:57.504 [2024-06-10 11:35:41.410529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x606060606060606 00:24:57.504 [2024-06-10 11:35:41.410556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.504 [2024-06-10 11:35:41.410761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:ffff0606 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.504 [2024-06-10 11:35:41.410778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.504 NEW_FUNC[1/1]: 0x116cba0 in spdk_nvmf_ctrlr_identify_iocs_specific /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3033 00:24:57.504 #35 NEW cov: 11994 ft: 13208 corp: 4/477b lim: 320 exec/s: 0 rss: 72Mb L: 218/218 MS: 1 InsertRepeatedBytes- 00:24:57.764 [2024-06-10 11:35:41.470547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.470577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.764 [2024-06-10 11:35:41.470675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.470692] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.764 #36 NEW cov: 12079 ft: 13454 corp: 5/667b lim: 320 exec/s: 0 rss: 72Mb L: 190/218 MS: 1 CrossOver- 00:24:57.764 [2024-06-10 11:35:41.521043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.521068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.764 [2024-06-10 11:35:41.521170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.521187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.764 [2024-06-10 11:35:41.521281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.521299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.764 #37 NEW cov: 12079 ft: 13796 corp: 6/867b lim: 320 exec/s: 0 rss: 72Mb L: 200/218 MS: 1 InsertRepeatedBytes- 00:24:57.764 [2024-06-10 11:35:41.581085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:33333333 SGL TRANSPORT DATA BLOCK TRANSPORT 0x3333333333333333 00:24:57.764 [2024-06-10 11:35:41.581112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.764 #38 NEW cov: 12096 ft: 13915 corp: 7/943b lim: 320 exec/s: 0 rss: 72Mb L: 76/218 MS: 1 InsertRepeatedBytes- 00:24:57.764 [2024-06-10 11:35:41.631623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.631648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.764 [2024-06-10 11:35:41.631738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.631754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.764 #39 NEW cov: 12096 ft: 13978 corp: 8/1133b lim: 320 exec/s: 0 rss: 72Mb L: 190/218 MS: 1 ShuffleBytes- 00:24:57.764 [2024-06-10 11:35:41.692237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.692267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.764 [2024-06-10 11:35:41.692360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:24:57.764 [2024-06-10 11:35:41.692375] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.764 [2024-06-10 11:35:41.692474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:57.764 [2024-06-10 11:35:41.692491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.764 #40 NEW cov: 12107 ft: 14053 corp: 9/1366b lim: 320 exec/s: 0 rss: 72Mb L: 233/233 MS: 1 InsertRepeatedBytes- 00:24:58.023 [2024-06-10 11:35:41.742577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.023 [2024-06-10 11:35:41.742604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.023 [2024-06-10 11:35:41.742706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.023 [2024-06-10 11:35:41.742723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:58.023 [2024-06-10 11:35:41.742820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:fffbffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.023 [2024-06-10 11:35:41.742837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.023 #41 NEW cov: 12107 ft: 14066 corp: 10/1566b lim: 320 exec/s: 0 rss: 72Mb L: 200/233 MS: 1 ChangeBit- 00:24:58.023 [2024-06-10 11:35:41.803177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x606060606060606 00:24:58.023 [2024-06-10 11:35:41.803202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.023 [2024-06-10 11:35:41.803409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:ffff0606 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.023 [2024-06-10 11:35:41.803426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.023 #42 NEW cov: 12107 ft: 14134 corp: 11/1784b lim: 320 exec/s: 0 rss: 72Mb L: 218/233 MS: 1 ShuffleBytes- 00:24:58.024 [2024-06-10 11:35:41.863299] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.024 [2024-06-10 11:35:41.863325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.024 [2024-06-10 11:35:41.863417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.024 [2024-06-10 11:35:41.863433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:58.024 #46 NEW cov: 12107 ft: 14165 corp: 12/1937b lim: 320 exec/s: 0 rss: 72Mb L: 153/233 MS: 4 ShuffleBytes-CrossOver-CMP-CrossOver- DE: "\000\016\335H\011\252\312n"- 00:24:58.024 [2024-06-10 11:35:41.913933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x606060606060606 00:24:58.024 [2024-06-10 11:35:41.913958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.024 [2024-06-10 11:35:41.914149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:ffff0606 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.024 [2024-06-10 11:35:41.914166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.024 #47 NEW cov: 12107 ft: 14185 corp: 13/2155b lim: 320 exec/s: 47 rss: 73Mb L: 218/233 MS: 1 ChangeBit- 00:24:58.283 [2024-06-10 11:35:41.984252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.283 [2024-06-10 11:35:41.984281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.283 [2024-06-10 11:35:41.984376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:24:58.283 [2024-06-10 11:35:41.984401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:58.283 #48 NEW cov: 12107 ft: 14233 corp: 14/2317b lim: 320 exec/s: 48 rss: 73Mb L: 162/233 MS: 1 EraseBytes- 00:24:58.283 [2024-06-10 11:35:42.055249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.283 [2024-06-10 11:35:42.055276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.283 [2024-06-10 11:35:42.055374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:24:58.283 [2024-06-10 11:35:42.055391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:58.283 [2024-06-10 11:35:42.055487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.283 [2024-06-10 11:35:42.055506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.283 [2024-06-10 11:35:42.055601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.283 [2024-06-10 11:35:42.055619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:58.283 #49 NEW cov: 12107 ft: 14407 corp: 15/2613b lim: 320 exec/s: 49 rss: 73Mb L: 296/296 MS: 1 
CrossOver- 00:24:58.283 [2024-06-10 11:35:42.105907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x606060606060606 00:24:58.283 [2024-06-10 11:35:42.105935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.283 [2024-06-10 11:35:42.106140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:ffff0606 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.283 [2024-06-10 11:35:42.106159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.283 [2024-06-10 11:35:42.106252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:202020ff cdw10:20202020 cdw11:20202020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2020202020202020 00:24:58.283 [2024-06-10 11:35:42.106269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:58.283 #55 NEW cov: 12107 ft: 14442 corp: 16/2891b lim: 320 exec/s: 55 rss: 73Mb L: 278/296 MS: 1 InsertRepeatedBytes- 00:24:58.283 [2024-06-10 11:35:42.175495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.283 [2024-06-10 11:35:42.175524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.283 [2024-06-10 11:35:42.175627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffff56 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.283 [2024-06-10 11:35:42.175645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:58.283 #56 NEW cov: 12107 ft: 14465 corp: 17/3052b lim: 320 exec/s: 56 rss: 73Mb L: 161/296 MS: 1 InsertByte- 00:24:58.283 [2024-06-10 11:35:42.226136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x606060606060606 00:24:58.283 [2024-06-10 11:35:42.226164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.283 [2024-06-10 11:35:42.226368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:ffff0606 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.283 [2024-06-10 11:35:42.226390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.542 #57 NEW cov: 12107 ft: 14496 corp: 18/3270b lim: 320 exec/s: 57 rss: 73Mb L: 218/296 MS: 1 ShuffleBytes- 00:24:58.542 [2024-06-10 11:35:42.276083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffff6eca cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.542 [2024-06-10 11:35:42.276110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.542 [2024-06-10 
11:35:42.276208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffff56 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.542 [2024-06-10 11:35:42.276229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:58.542 #58 NEW cov: 12107 ft: 14517 corp: 19/3431b lim: 320 exec/s: 58 rss: 73Mb L: 161/296 MS: 1 PersAutoDict- DE: "\000\016\335H\011\252\312n"- 00:24:58.542 [2024-06-10 11:35:42.336584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:6ecaaa09 cdw10:ffffffff cdw11:0948ddff 00:24:58.542 [2024-06-10 11:35:42.336611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.542 #63 NEW cov: 12107 ft: 14547 corp: 20/3513b lim: 320 exec/s: 63 rss: 73Mb L: 82/296 MS: 5 CopyPart-InsertByte-PersAutoDict-CrossOver-CopyPart- DE: "\000\016\335H\011\252\312n"- 00:24:58.543 [2024-06-10 11:35:42.386939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.543 [2024-06-10 11:35:42.386966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.543 #64 NEW cov: 12110 ft: 14685 corp: 21/3612b lim: 320 exec/s: 64 rss: 73Mb L: 99/296 MS: 1 InsertByte- 00:24:58.543 [2024-06-10 11:35:42.438114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x606060606060606 00:24:58.543 [2024-06-10 11:35:42.438140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.543 [2024-06-10 11:35:42.438358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:ffff0606 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.543 [2024-06-10 11:35:42.438377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.543 #65 NEW cov: 12110 ft: 14696 corp: 22/3830b lim: 320 exec/s: 65 rss: 73Mb L: 218/296 MS: 1 ShuffleBytes- 00:24:58.543 [2024-06-10 11:35:42.488183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.543 [2024-06-10 11:35:42.488211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.543 [2024-06-10 11:35:42.488304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:24:58.543 [2024-06-10 11:35:42.488322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:58.543 [2024-06-10 11:35:42.488424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.543 [2024-06-10 11:35:42.488441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.801 #66 NEW cov: 12110 ft: 14745 corp: 23/4063b lim: 320 exec/s: 66 rss: 73Mb L: 233/296 MS: 1 ChangeBit- 00:24:58.801 [2024-06-10 11:35:42.538732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x606060606060606 00:24:58.801 [2024-06-10 11:35:42.538757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.801 [2024-06-10 11:35:42.538969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:ffff0606 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.801 [2024-06-10 11:35:42.538987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.801 #67 NEW cov: 12110 ft: 14758 corp: 24/4281b lim: 320 exec/s: 67 rss: 73Mb L: 218/296 MS: 1 PersAutoDict- DE: "\000\016\335H\011\252\312n"- 00:24:58.801 [2024-06-10 11:35:42.598594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.801 [2024-06-10 11:35:42.598619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.801 #68 NEW cov: 12110 ft: 14763 corp: 25/4395b lim: 320 exec/s: 68 rss: 73Mb L: 114/296 MS: 1 EraseBytes- 00:24:58.802 [2024-06-10 11:35:42.659359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.802 [2024-06-10 11:35:42.659385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.802 [2024-06-10 11:35:42.659477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.802 [2024-06-10 11:35:42.659495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:58.802 [2024-06-10 11:35:42.659582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.802 [2024-06-10 11:35:42.659598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:58.802 #69 NEW cov: 12110 ft: 14817 corp: 26/4604b lim: 320 exec/s: 69 rss: 73Mb L: 209/296 MS: 1 InsertRepeatedBytes- 00:24:58.802 [2024-06-10 11:35:42.720226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x606060606060606 00:24:58.802 [2024-06-10 11:35:42.720253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:58.802 [2024-06-10 11:35:42.720464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:ffff0606 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:58.802 [2024-06-10 
11:35:42.720483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:59.061 #70 NEW cov: 12110 ft: 14846 corp: 27/4822b lim: 320 exec/s: 70 rss: 73Mb L: 218/296 MS: 1 ChangeByte- 00:24:59.061 [2024-06-10 11:35:42.780211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:59.061 [2024-06-10 11:35:42.780237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.061 [2024-06-10 11:35:42.780336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:59.061 [2024-06-10 11:35:42.780354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:59.061 [2024-06-10 11:35:42.780457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:59.061 [2024-06-10 11:35:42.780474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:59.061 #71 NEW cov: 12110 ft: 14860 corp: 28/5022b lim: 320 exec/s: 71 rss: 73Mb L: 200/296 MS: 1 ShuffleBytes- 00:24:59.061 [2024-06-10 11:35:42.830284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:59.061 [2024-06-10 11:35:42.830315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.061 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:24:59.061 #73 NEW cov: 12133 ft: 14898 corp: 29/5125b lim: 320 exec/s: 73 rss: 73Mb L: 103/296 MS: 2 CopyPart-InsertRepeatedBytes- 00:24:59.061 [2024-06-10 11:35:42.880612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xf7ffffffffffffff 00:24:59.061 [2024-06-10 11:35:42.880637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.062 [2024-06-10 11:35:42.880723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:59.062 [2024-06-10 11:35:42.880740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:59.062 #74 NEW cov: 12133 ft: 14906 corp: 30/5315b lim: 320 exec/s: 74 rss: 73Mb L: 190/296 MS: 1 ChangeBinInt- 00:24:59.062 [2024-06-10 11:35:42.930608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:24:59.062 [2024-06-10 11:35:42.930636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.062 #75 NEW cov: 12133 ft: 
14914 corp: 31/5418b lim: 320 exec/s: 37 rss: 73Mb L: 103/296 MS: 1 EraseBytes- 00:24:59.062 #75 DONE cov: 12133 ft: 14914 corp: 31/5418b lim: 320 exec/s: 37 rss: 73Mb 00:24:59.062 ###### Recommended dictionary. ###### 00:24:59.062 "\000\016\335H\011\252\312n" # Uses: 4 00:24:59.062 ###### End of recommended dictionary. ###### 00:24:59.062 Done 75 runs in 2 second(s) 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:24:59.321 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:24:59.322 11:35:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:24:59.322 [2024-06-10 11:35:43.146113] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:24:59.322 [2024-06-10 11:35:43.146183] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328610 ] 00:24:59.322 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.581 [2024-06-10 11:35:43.465770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.840 [2024-06-10 11:35:43.553068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.840 [2024-06-10 11:35:43.613763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.840 [2024-06-10 11:35:43.630027] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:24:59.840 INFO: Running with entropic power schedule (0xFF, 100). 00:24:59.840 INFO: Seed: 905861433 00:24:59.841 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:24:59.841 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:24:59.841 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:24:59.841 INFO: A corpus is not provided, starting from an empty corpus 00:24:59.841 #2 INITED exec/s: 0 rss: 65Mb 00:24:59.841 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:24:59.841 This may also happen if the target rejected all inputs we tried so far 00:24:59.841 [2024-06-10 11:35:43.685199] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:24:59.841 [2024-06-10 11:35:43.685349] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:24:59.841 [2024-06-10 11:35:43.685562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.841 [2024-06-10 11:35:43.685592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.841 [2024-06-10 11:35:43.685652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.841 [2024-06-10 11:35:43.685668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.100 NEW_FUNC[1/687]: 0x483780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:25:00.100 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:00.100 #4 NEW cov: 11884 ft: 11887 corp: 2/14b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:25:00.100 [2024-06-10 11:35:44.025953] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.100 [2024-06-10 11:35:44.026205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.100 [2024-06-10 11:35:44.026238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.359 #5 NEW cov: 12018 ft: 12721 corp: 3/25b lim: 30 exec/s: 0 rss: 72Mb L: 11/13 MS: 1 
EraseBytes- 00:25:00.359 [2024-06-10 11:35:44.076041] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.359 [2024-06-10 11:35:44.076260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.359 [2024-06-10 11:35:44.076286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.359 #6 NEW cov: 12024 ft: 13096 corp: 4/35b lim: 30 exec/s: 0 rss: 72Mb L: 10/13 MS: 1 EraseBytes- 00:25:00.359 [2024-06-10 11:35:44.116229] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.359 [2024-06-10 11:35:44.116347] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.359 [2024-06-10 11:35:44.116463] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.359 [2024-06-10 11:35:44.116669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.359 [2024-06-10 11:35:44.116695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.359 [2024-06-10 11:35:44.116751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.359 [2024-06-10 11:35:44.116766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.359 [2024-06-10 11:35:44.116818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.359 [2024-06-10 11:35:44.116833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:00.359 #11 NEW cov: 12109 ft: 13606 corp: 5/58b lim: 30 exec/s: 0 rss: 72Mb L: 23/23 MS: 5 CopyPart-CrossOver-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:25:00.359 [2024-06-10 11:35:44.156305] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:00.359 [2024-06-10 11:35:44.156533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aa383a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.359 [2024-06-10 11:35:44.156558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.359 #12 NEW cov: 12109 ft: 13688 corp: 6/69b lim: 30 exec/s: 0 rss: 72Mb L: 11/23 MS: 1 InsertRepeatedBytes- 00:25:00.359 [2024-06-10 11:35:44.196368] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:00.359 [2024-06-10 11:35:44.196599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.359 [2024-06-10 11:35:44.196628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.359 #13 NEW cov: 12109 ft: 13812 corp: 7/80b lim: 30 exec/s: 0 rss: 72Mb L: 11/23 MS: 1 ChangeByte- 00:25:00.359 [2024-06-10 11:35:44.246499] 
ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:00.359 [2024-06-10 11:35:44.246709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.359 [2024-06-10 11:35:44.246735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.360 #14 NEW cov: 12109 ft: 13863 corp: 8/91b lim: 30 exec/s: 0 rss: 73Mb L: 11/23 MS: 1 ChangeByte- 00:25:00.360 [2024-06-10 11:35:44.296645] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.360 [2024-06-10 11:35:44.296867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.360 [2024-06-10 11:35:44.296893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.619 #15 NEW cov: 12109 ft: 13942 corp: 9/102b lim: 30 exec/s: 0 rss: 73Mb L: 11/23 MS: 1 ChangeByte- 00:25:00.619 [2024-06-10 11:35:44.346790] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a2 00:25:00.619 [2024-06-10 11:35:44.346997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.347022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.619 #21 NEW cov: 12109 ft: 13983 corp: 10/113b lim: 30 exec/s: 0 rss: 73Mb L: 11/23 MS: 1 ChangeBit- 00:25:00.619 [2024-06-10 11:35:44.396949] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.619 [2024-06-10 11:35:44.397077] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.619 [2024-06-10 11:35:44.397184] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.619 [2024-06-10 11:35:44.397394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.397421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.619 [2024-06-10 11:35:44.397478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.397494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.619 [2024-06-10 11:35:44.397547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.397561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:00.619 #22 NEW cov: 12109 ft: 14004 corp: 11/136b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:25:00.619 [2024-06-10 11:35:44.437081] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.619 [2024-06-10 11:35:44.437300] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.437326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.619 #23 NEW cov: 12109 ft: 14018 corp: 12/146b lim: 30 exec/s: 0 rss: 73Mb L: 10/23 MS: 1 ChangeBit- 00:25:00.619 [2024-06-10 11:35:44.487112] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ada3 00:25:00.619 [2024-06-10 11:35:44.487331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.487357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.619 #24 NEW cov: 12109 ft: 14074 corp: 13/157b lim: 30 exec/s: 0 rss: 73Mb L: 11/23 MS: 1 ChangeByte- 00:25:00.619 [2024-06-10 11:35:44.527310] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.619 [2024-06-10 11:35:44.527424] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.619 [2024-06-10 11:35:44.527529] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.619 [2024-06-10 11:35:44.527758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083fd cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.527784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.619 [2024-06-10 11:35:44.527841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.527857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.619 [2024-06-10 11:35:44.527912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.619 [2024-06-10 11:35:44.527926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:00.619 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:00.619 #25 NEW cov: 12132 ft: 14167 corp: 14/180b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeBinInt- 00:25:00.878 [2024-06-10 11:35:44.577462] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:00.878 [2024-06-10 11:35:44.577581] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (166916) > buf size (4096) 00:25:00.878 [2024-06-10 11:35:44.577788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.878 [2024-06-10 11:35:44.577820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.878 [2024-06-10 11:35:44.577878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 
cdw10:a3000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.878 [2024-06-10 11:35:44.577892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.878 #26 NEW cov: 12155 ft: 14270 corp: 15/196b lim: 30 exec/s: 0 rss: 73Mb L: 16/23 MS: 1 InsertRepeatedBytes- 00:25:00.878 [2024-06-10 11:35:44.617620] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.878 [2024-06-10 11:35:44.617753] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1ff 00:25:00.878 [2024-06-10 11:35:44.617861] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.878 [2024-06-10 11:35:44.617971] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.878 [2024-06-10 11:35:44.618177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.878 [2024-06-10 11:35:44.618203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.878 [2024-06-10 11:35:44.618280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.878 [2024-06-10 11:35:44.618295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.878 [2024-06-10 11:35:44.618350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.878 [2024-06-10 11:35:44.618364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:00.878 [2024-06-10 11:35:44.618418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.878 [2024-06-10 11:35:44.618432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:00.878 #27 NEW cov: 12155 ft: 14782 corp: 16/221b lim: 30 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CMP- DE: "\000\001"- 00:25:00.878 [2024-06-10 11:35:44.667798] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.878 [2024-06-10 11:35:44.667931] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004949 00:25:00.878 [2024-06-10 11:35:44.668043] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:25:00.878 [2024-06-10 11:35:44.668149] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:00.878 [2024-06-10 11:35:44.668263] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff3d 00:25:00.878 [2024-06-10 11:35:44.668501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.878 [2024-06-10 11:35:44.668527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.878 [2024-06-10 11:35:44.668582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff498149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.878 [2024-06-10 11:35:44.668600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.879 [2024-06-10 11:35:44.668654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff8100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.879 [2024-06-10 11:35:44.668668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:00.879 [2024-06-10 11:35:44.668723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.879 [2024-06-10 11:35:44.668737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:00.879 [2024-06-10 11:35:44.668792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.879 [2024-06-10 11:35:44.668806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:00.879 #28 NEW cov: 12155 ft: 14859 corp: 17/251b lim: 30 exec/s: 28 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:25:00.879 [2024-06-10 11:35:44.717869] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:25:00.879 [2024-06-10 11:35:44.718107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0cff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.879 [2024-06-10 11:35:44.718133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.879 #29 NEW cov: 12155 ft: 14947 corp: 18/262b lim: 30 exec/s: 29 rss: 73Mb L: 11/30 MS: 1 ChangeByte- 00:25:00.879 [2024-06-10 11:35:44.767946] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ada3 00:25:00.879 [2024-06-10 11:35:44.768175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083e3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.879 [2024-06-10 11:35:44.768201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:00.879 #30 NEW cov: 12155 ft: 15013 corp: 19/273b lim: 30 exec/s: 30 rss: 73Mb L: 11/30 MS: 1 ChangeBit- 00:25:00.879 [2024-06-10 11:35:44.818131] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ada3 00:25:00.879 [2024-06-10 11:35:44.818344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.879 [2024-06-10 11:35:44.818377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.138 #31 NEW cov: 12155 ft: 15033 corp: 20/284b lim: 30 exec/s: 31 rss: 73Mb L: 11/30 MS: 1 ShuffleBytes- 00:25:01.138 [2024-06-10 11:35:44.858289] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff49 00:25:01.138 [2024-06-10 11:35:44.858423] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x1000049ff 00:25:01.138 [2024-06-10 11:35:44.858530] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.138 [2024-06-10 11:35:44.858634] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.138 [2024-06-10 11:35:44.858846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.138 [2024-06-10 11:35:44.858872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.138 [2024-06-10 11:35:44.858927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:49498149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.138 [2024-06-10 11:35:44.858945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.138 [2024-06-10 11:35:44.858998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.138 [2024-06-10 11:35:44.859013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.138 [2024-06-10 11:35:44.859066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.138 [2024-06-10 11:35:44.859080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.138 #32 NEW cov: 12155 ft: 15043 corp: 21/313b lim: 30 exec/s: 32 rss: 74Mb L: 29/30 MS: 1 InsertRepeatedBytes- 00:25:01.138 [2024-06-10 11:35:44.898427] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff49 00:25:01.138 [2024-06-10 11:35:44.898542] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000049ff 00:25:01.138 [2024-06-10 11:35:44.898650] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.138 [2024-06-10 11:35:44.898755] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.138 [2024-06-10 11:35:44.898973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.138 [2024-06-10 11:35:44.898999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.138 [2024-06-10 11:35:44.899055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:49498149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.138 [2024-06-10 11:35:44.899069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.138 [2024-06-10 11:35:44.899122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff6083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.138 [2024-06-10 11:35:44.899136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.138 [2024-06-10 11:35:44.899187] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.138 [2024-06-10 11:35:44.899201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.138 #33 NEW cov: 12155 ft: 15049 corp: 22/342b lim: 30 exec/s: 33 rss: 74Mb L: 29/30 MS: 1 ChangeByte- 00:25:01.138 [2024-06-10 11:35:44.948480] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:01.139 [2024-06-10 11:35:44.948700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aa383ad cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.139 [2024-06-10 11:35:44.948725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.139 #34 NEW cov: 12155 ft: 15058 corp: 23/351b lim: 30 exec/s: 34 rss: 74Mb L: 9/30 MS: 1 EraseBytes- 00:25:01.139 [2024-06-10 11:35:44.998640] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000047ea 00:25:01.139 [2024-06-10 11:35:44.998773] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000000ff 00:25:01.139 [2024-06-10 11:35:44.998885] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.139 [2024-06-10 11:35:44.999092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.139 [2024-06-10 11:35:44.999118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.139 [2024-06-10 11:35:44.999178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b74902dd cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.139 [2024-06-10 11:35:44.999193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.139 [2024-06-10 11:35:44.999251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.139 [2024-06-10 11:35:44.999266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.139 #35 NEW cov: 12155 ft: 15084 corp: 24/372b lim: 30 exec/s: 35 rss: 74Mb L: 21/30 MS: 1 CMP- DE: "]G\352\267I\335\016\000"- 00:25:01.139 [2024-06-10 11:35:45.038794] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff49 00:25:01.139 [2024-06-10 11:35:45.038926] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000049ff 00:25:01.139 [2024-06-10 11:35:45.039037] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.139 [2024-06-10 11:35:45.039145] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.139 [2024-06-10 11:35:45.039371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.139 [2024-06-10 11:35:45.039398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.139 [2024-06-10 
11:35:45.039454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:49498149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.139 [2024-06-10 11:35:45.039469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.139 [2024-06-10 11:35:45.039521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff6083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.139 [2024-06-10 11:35:45.039535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.139 [2024-06-10 11:35:45.039586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.139 [2024-06-10 11:35:45.039599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.139 #36 NEW cov: 12155 ft: 15113 corp: 25/397b lim: 30 exec/s: 36 rss: 74Mb L: 25/30 MS: 1 EraseBytes- 00:25:01.399 [2024-06-10 11:35:45.088946] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000047ea 00:25:01.399 [2024-06-10 11:35:45.089068] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (711976) > buf size (4096) 00:25:01.399 [2024-06-10 11:35:45.089181] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.399 [2024-06-10 11:35:45.089399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.089425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.089483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b74902dd cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.089498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.089552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:01ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.089566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.399 #37 NEW cov: 12155 ft: 15149 corp: 26/418b lim: 30 exec/s: 37 rss: 74Mb L: 21/30 MS: 1 PersAutoDict- DE: "\000\001"- 00:25:01.399 [2024-06-10 11:35:45.139053] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff49 00:25:01.399 [2024-06-10 11:35:45.139169] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000049ff 00:25:01.399 [2024-06-10 11:35:45.139301] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.399 [2024-06-10 11:35:45.139521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.139547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.139605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:49498149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.139620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.139674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.139688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.399 #38 NEW cov: 12155 ft: 15181 corp: 27/437b lim: 30 exec/s: 38 rss: 74Mb L: 19/30 MS: 1 EraseBytes- 00:25:01.399 [2024-06-10 11:35:45.189256] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff49 00:25:01.399 [2024-06-10 11:35:45.189390] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000049ff 00:25:01.399 [2024-06-10 11:35:45.189500] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff3d 00:25:01.399 [2024-06-10 11:35:45.189709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.189734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.189791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:49498149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.189804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.189860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.189874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.399 #39 NEW cov: 12155 ft: 15192 corp: 28/455b lim: 30 exec/s: 39 rss: 74Mb L: 18/30 MS: 1 EraseBytes- 00:25:01.399 [2024-06-10 11:35:45.229353] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff49 00:25:01.399 [2024-06-10 11:35:45.229484] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004949 00:25:01.399 [2024-06-10 11:35:45.229591] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.399 [2024-06-10 11:35:45.229801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.229826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.229883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00018149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.229897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.229955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:49ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.229968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.399 #40 NEW cov: 12155 ft: 15228 corp: 29/475b lim: 30 exec/s: 40 rss: 74Mb L: 20/30 MS: 1 PersAutoDict- DE: "\000\001"- 00:25:01.399 [2024-06-10 11:35:45.279474] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.399 [2024-06-10 11:35:45.279592] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.399 [2024-06-10 11:35:45.279701] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.399 [2024-06-10 11:35:45.279910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.279936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.279992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.280007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.280059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.280073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.399 #41 NEW cov: 12155 ft: 15231 corp: 30/494b lim: 30 exec/s: 41 rss: 74Mb L: 19/30 MS: 1 CopyPart- 00:25:01.399 [2024-06-10 11:35:45.319591] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ef49 00:25:01.399 [2024-06-10 11:35:45.319708] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000049ff 00:25:01.399 [2024-06-10 11:35:45.319823] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff3d 00:25:01.399 [2024-06-10 11:35:45.320041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.320067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.320122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:49498149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.320137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.399 [2024-06-10 11:35:45.320189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.399 [2024-06-10 11:35:45.320203] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.399 #42 NEW cov: 12155 ft: 15238 corp: 31/512b lim: 30 exec/s: 42 rss: 74Mb L: 18/30 MS: 1 ChangeBit- 00:25:01.659 [2024-06-10 11:35:45.359641] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:01.659 [2024-06-10 11:35:45.359853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.359878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.659 #43 NEW cov: 12155 ft: 15296 corp: 32/523b lim: 30 exec/s: 43 rss: 74Mb L: 11/30 MS: 1 ChangeByte- 00:25:01.659 [2024-06-10 11:35:45.399836] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.659 [2024-06-10 11:35:45.399976] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x19 00:25:01.659 [2024-06-10 11:35:45.400092] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.659 [2024-06-10 11:35:45.400199] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.659 [2024-06-10 11:35:45.400426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.400452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.659 [2024-06-10 11:35:45.400506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.400521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.659 [2024-06-10 11:35:45.400577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.400591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.659 [2024-06-10 11:35:45.400646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.400659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.659 #44 NEW cov: 12155 ft: 15311 corp: 33/548b lim: 30 exec/s: 44 rss: 74Mb L: 25/30 MS: 1 ChangeBinInt- 00:25:01.659 [2024-06-10 11:35:45.439907] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xe3a3 00:25:01.659 [2024-06-10 11:35:45.440022] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:01.659 [2024-06-10 11:35:45.440243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.440274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.659 [2024-06-10 11:35:45.440329] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ada383a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.440344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.659 #45 NEW cov: 12155 ft: 15334 corp: 34/561b lim: 30 exec/s: 45 rss: 74Mb L: 13/30 MS: 1 CMP- DE: "\000\000"- 00:25:01.659 [2024-06-10 11:35:45.490069] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000aff 00:25:01.659 [2024-06-10 11:35:45.490204] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.659 [2024-06-10 11:35:45.490428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.490453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.659 [2024-06-10 11:35:45.490509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.490524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.659 #46 NEW cov: 12155 ft: 15352 corp: 35/575b lim: 30 exec/s: 46 rss: 74Mb L: 14/30 MS: 1 InsertRepeatedBytes- 00:25:01.659 [2024-06-10 11:35:45.540255] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:01.659 [2024-06-10 11:35:45.540499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083a3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.540528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.659 #47 NEW cov: 12155 ft: 15366 corp: 36/586b lim: 30 exec/s: 47 rss: 74Mb L: 11/30 MS: 1 CopyPart- 00:25:01.659 [2024-06-10 11:35:45.580339] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.659 [2024-06-10 11:35:45.580451] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.659 [2024-06-10 11:35:45.580557] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.659 [2024-06-10 11:35:45.580752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.580778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.659 [2024-06-10 11:35:45.580835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.580849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.659 [2024-06-10 11:35:45.580905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.659 [2024-06-10 11:35:45.580919] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.659 #48 NEW cov: 12155 ft: 15375 corp: 37/609b lim: 30 exec/s: 48 rss: 74Mb L: 23/30 MS: 1 ShuffleBytes- 00:25:01.919 [2024-06-10 11:35:45.620366] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a3a3 00:25:01.919 [2024-06-10 11:35:45.620583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aa383ad cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.919 [2024-06-10 11:35:45.620609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.919 #49 NEW cov: 12155 ft: 15411 corp: 38/618b lim: 30 exec/s: 49 rss: 74Mb L: 9/30 MS: 1 ChangeByte- 00:25:01.919 [2024-06-10 11:35:45.670664] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.919 [2024-06-10 11:35:45.670782] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004949 00:25:01.919 [2024-06-10 11:35:45.670885] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:25:01.919 [2024-06-10 11:35:45.670990] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:25:01.919 [2024-06-10 11:35:45.671093] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff3d 00:25:01.919 [2024-06-10 11:35:45.671311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.919 [2024-06-10 11:35:45.671337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:01.919 [2024-06-10 11:35:45.671396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff498149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.919 [2024-06-10 11:35:45.671411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:01.919 [2024-06-10 11:35:45.671463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff8100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.919 [2024-06-10 11:35:45.671477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:01.919 [2024-06-10 11:35:45.671533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.919 [2024-06-10 11:35:45.671550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:01.919 [2024-06-10 11:35:45.671607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.919 [2024-06-10 11:35:45.671621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:01.919 #50 NEW cov: 12155 ft: 15421 corp: 39/648b lim: 30 exec/s: 25 rss: 74Mb L: 30/30 MS: 1 CopyPart- 00:25:01.919 #50 DONE cov: 12155 ft: 15421 corp: 39/648b lim: 30 exec/s: 25 rss: 74Mb 00:25:01.919 ###### Recommended dictionary. 
###### 00:25:01.919 "\000\001" # Uses: 2 00:25:01.919 "]G\352\267I\335\016\000" # Uses: 0 00:25:01.919 "\000\000" # Uses: 0 00:25:01.919 ###### End of recommended dictionary. ###### 00:25:01.919 Done 50 runs in 2 second(s) 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:25:01.919 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:02.178 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:02.178 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:02.178 11:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:25:02.178 [2024-06-10 11:35:45.898656] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:25:02.178 [2024-06-10 11:35:45.898735] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328974 ] 00:25:02.178 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.437 [2024-06-10 11:35:46.204557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.437 [2024-06-10 11:35:46.298221] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.437 [2024-06-10 11:35:46.358455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.437 [2024-06-10 11:35:46.374715] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:25:02.695 INFO: Running with entropic power schedule (0xFF, 100). 00:25:02.695 INFO: Seed: 3650856350 00:25:02.695 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:02.695 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:02.695 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:25:02.695 INFO: A corpus is not provided, starting from an empty corpus 00:25:02.695 #2 INITED exec/s: 0 rss: 65Mb 00:25:02.695 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:02.695 This may also happen if the target rejected all inputs we tried so far 00:25:02.695 [2024-06-10 11:35:46.429558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.695 [2024-06-10 11:35:46.429595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.695 [2024-06-10 11:35:46.429628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.695 [2024-06-10 11:35:46.429645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.954 NEW_FUNC[1/685]: 0x486230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:25:02.954 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:02.954 #6 NEW cov: 11839 ft: 11843 corp: 2/20b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 4 ChangeByte-CopyPart-CrossOver-InsertRepeatedBytes- 00:25:02.954 [2024-06-10 11:35:46.790985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.954 [2024-06-10 11:35:46.791034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.954 [2024-06-10 11:35:46.791084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.954 [2024-06-10 11:35:46.791101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.954 NEW_FUNC[1/1]: 0x1ddbd80 in msg_queue_run_batch 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:814 00:25:02.954 #7 NEW cov: 11974 ft: 12231 corp: 3/39b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ChangeBit- 00:25:02.954 [2024-06-10 11:35:46.871067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.954 [2024-06-10 11:35:46.871101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.954 [2024-06-10 11:35:46.871135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.954 [2024-06-10 11:35:46.871152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.213 #8 NEW cov: 11980 ft: 12381 corp: 4/58b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ChangeBinInt- 00:25:03.213 [2024-06-10 11:35:46.951204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:46.951234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.213 [2024-06-10 11:35:46.951298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:46.951330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.213 #9 NEW cov: 12065 ft: 12754 corp: 5/77b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ShuffleBytes- 00:25:03.213 [2024-06-10 11:35:47.011403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:47.011434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.213 [2024-06-10 11:35:47.011467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:47.011483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.213 #15 NEW cov: 12065 ft: 13026 corp: 6/96b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ChangeBinInt- 00:25:03.213 [2024-06-10 11:35:47.061487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545009d cdw11:45004549 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:47.061519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.213 [2024-06-10 11:35:47.061566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:47.061583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.213 #16 NEW cov: 12065 ft: 13150 corp: 7/115b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 
ChangeBinInt- 00:25:03.213 [2024-06-10 11:35:47.111819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:47.111852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.213 [2024-06-10 11:35:47.111885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:47.111901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.213 [2024-06-10 11:35:47.111930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45f10045 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:47.111946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.213 [2024-06-10 11:35:47.111975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.213 [2024-06-10 11:35:47.111990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.472 #17 NEW cov: 12065 ft: 13755 corp: 8/147b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:25:03.472 [2024-06-10 11:35:47.192006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.192036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.472 [2024-06-10 11:35:47.192084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.192100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.472 [2024-06-10 11:35:47.192129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.192145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.472 [2024-06-10 11:35:47.192173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.192196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.472 #18 NEW cov: 12065 ft: 13868 corp: 9/179b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:25:03.472 [2024-06-10 11:35:47.252045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4b45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.252077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:03.472 [2024-06-10 11:35:47.252110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.252126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.472 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:03.472 #19 NEW cov: 12082 ft: 13885 corp: 10/198b lim: 35 exec/s: 0 rss: 73Mb L: 19/32 MS: 1 ChangeBinInt- 00:25:03.472 [2024-06-10 11:35:47.332264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.332296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.472 [2024-06-10 11:35:47.332329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:bb004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.332345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.472 #20 NEW cov: 12082 ft: 13921 corp: 11/217b lim: 35 exec/s: 0 rss: 73Mb L: 19/32 MS: 1 ChangeBinInt- 00:25:03.472 [2024-06-10 11:35:47.382370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8645009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.382402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.472 [2024-06-10 11:35:47.382450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:bb004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.472 [2024-06-10 11:35:47.382466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.732 #21 NEW cov: 12082 ft: 13940 corp: 12/236b lim: 35 exec/s: 21 rss: 73Mb L: 19/32 MS: 1 ChangeByte- 00:25:03.732 [2024-06-10 11:35:47.462704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.462735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.732 [2024-06-10 11:35:47.462783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:9d004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.462799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.732 [2024-06-10 11:35:47.462828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45490045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.462844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.732 [2024-06-10 11:35:47.462872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 
cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.462892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.732 #22 NEW cov: 12082 ft: 13976 corp: 13/266b lim: 35 exec/s: 22 rss: 73Mb L: 30/32 MS: 1 CrossOver- 00:25:03.732 [2024-06-10 11:35:47.522650] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:03.732 [2024-06-10 11:35:47.522836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4500009d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.522859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.732 [2024-06-10 11:35:47.522891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.522909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.732 [2024-06-10 11:35:47.522937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.522953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.732 #23 NEW cov: 12091 ft: 14169 corp: 14/293b lim: 35 exec/s: 23 rss: 73Mb L: 27/32 MS: 1 InsertRepeatedBytes- 00:25:03.732 [2024-06-10 11:35:47.602886] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:03.732 [2024-06-10 11:35:47.603057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000009d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.603081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.732 [2024-06-10 11:35:47.603113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:004b0000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.603130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.732 [2024-06-10 11:35:47.603160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.732 [2024-06-10 11:35:47.603176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.732 #24 NEW cov: 12091 ft: 14239 corp: 15/320b lim: 35 exec/s: 24 rss: 73Mb L: 27/32 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:25:03.992 [2024-06-10 11:35:47.683322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.683353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.683402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:ba0045be SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.683418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.683448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45f100ba cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.683464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.683494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.683509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.992 #25 NEW cov: 12091 ft: 14256 corp: 16/352b lim: 35 exec/s: 25 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:25:03.992 [2024-06-10 11:35:47.763400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545009d cdw11:45009d45 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.763431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.763479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.763495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.992 #26 NEW cov: 12091 ft: 14284 corp: 17/371b lim: 35 exec/s: 26 rss: 73Mb L: 19/32 MS: 1 CopyPart- 00:25:03.992 [2024-06-10 11:35:47.813437] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:03.992 [2024-06-10 11:35:47.813622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.813646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.813679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:00004500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.813695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.813724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:4500004b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.813741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.813770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.813785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:25:03.992 #27 NEW cov: 12091 ft: 14298 corp: 18/404b lim: 35 exec/s: 27 rss: 73Mb L: 33/33 MS: 1 CrossOver- 00:25:03.992 [2024-06-10 11:35:47.863611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.863640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.863688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:bb004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.863704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.992 #28 NEW cov: 12091 ft: 14308 corp: 19/424b lim: 35 exec/s: 28 rss: 73Mb L: 20/33 MS: 1 InsertByte- 00:25:03.992 [2024-06-10 11:35:47.913639] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:03.992 [2024-06-10 11:35:47.913820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4500009d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.913843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.913874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.913891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.992 [2024-06-10 11:35:47.913924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.992 [2024-06-10 11:35:47.913940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:04.252 #29 NEW cov: 12091 ft: 14349 corp: 20/451b lim: 35 exec/s: 29 rss: 73Mb L: 27/33 MS: 1 ShuffleBytes- 00:25:04.252 [2024-06-10 11:35:47.994084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:47.994113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:04.252 [2024-06-10 11:35:47.994161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:47.994177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:04.252 [2024-06-10 11:35:47.994206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45f10045 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:47.994222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:04.252 [2024-06-10 11:35:47.994257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:7 nsid:0 cdw10:f1f100f1 cdw11:f100f0f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:47.994273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:04.252 #30 NEW cov: 12091 ft: 14364 corp: 21/483b lim: 35 exec/s: 30 rss: 73Mb L: 32/33 MS: 1 ChangeBit- 00:25:04.252 [2024-06-10 11:35:48.044039] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:04.252 [2024-06-10 11:35:48.044178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:48.044201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:04.252 [2024-06-10 11:35:48.044232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:48.044256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:04.252 #31 NEW cov: 12091 ft: 14384 corp: 22/503b lim: 35 exec/s: 31 rss: 73Mb L: 20/33 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:25:04.252 [2024-06-10 11:35:48.124336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:48.124375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:04.252 [2024-06-10 11:35:48.124423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:bb004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:48.124439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:04.252 #32 NEW cov: 12091 ft: 14388 corp: 23/522b lim: 35 exec/s: 32 rss: 73Mb L: 19/33 MS: 1 ShuffleBytes- 00:25:04.252 [2024-06-10 11:35:48.174439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:48.174470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:04.252 [2024-06-10 11:35:48.174502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:bb004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.252 [2024-06-10 11:35:48.174522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:04.513 #33 NEW cov: 12091 ft: 14412 corp: 24/541b lim: 35 exec/s: 33 rss: 73Mb L: 19/33 MS: 1 ChangeByte- 00:25:04.513 [2024-06-10 11:35:48.224677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.224708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:04.513 [2024-06-10 11:35:48.224740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:9d004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.224756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:04.513 [2024-06-10 11:35:48.224785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45490045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.224801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:04.513 [2024-06-10 11:35:48.224829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450065 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.224844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:04.513 #34 NEW cov: 12091 ft: 14455 corp: 25/571b lim: 35 exec/s: 34 rss: 73Mb L: 30/33 MS: 1 ChangeBit- 00:25:04.513 [2024-06-10 11:35:48.304729] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:04.513 [2024-06-10 11:35:48.304899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.304923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:04.513 [2024-06-10 11:35:48.304955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:4b450000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.304972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:04.513 [2024-06-10 11:35:48.305001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.305016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:04.513 #35 NEW cov: 12098 ft: 14492 corp: 26/594b lim: 35 exec/s: 35 rss: 74Mb L: 23/33 MS: 1 EraseBytes- 00:25:04.513 [2024-06-10 11:35:48.384988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4d45009d cdw11:45003245 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.385018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:04.513 [2024-06-10 11:35:48.385065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.513 [2024-06-10 11:35:48.385081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:04.513 #36 NEW cov: 12098 ft: 14500 corp: 27/614b lim: 35 exec/s: 18 rss: 74Mb L: 20/33 MS: 1 InsertByte- 00:25:04.513 #36 DONE cov: 12098 ft: 14500 corp: 27/614b lim: 35 exec/s: 18 rss: 74Mb 00:25:04.513 ###### Recommended dictionary. 
###### 00:25:04.513 "\000\000\000\000\000\000\000\000" # Uses: 1 00:25:04.513 ###### End of recommended dictionary. ###### 00:25:04.513 Done 36 runs in 2 second(s) 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:04.773 11:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:25:04.773 [2024-06-10 11:35:48.598954] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:25:04.773 [2024-06-10 11:35:48.599043] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329328 ] 00:25:04.773 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.032 [2024-06-10 11:35:48.922544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.291 [2024-06-10 11:35:49.017762] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.291 [2024-06-10 11:35:49.078112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.291 [2024-06-10 11:35:49.094374] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:25:05.291 INFO: Running with entropic power schedule (0xFF, 100). 00:25:05.291 INFO: Seed: 2073882045 00:25:05.291 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:05.291 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:05.291 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:25:05.291 INFO: A corpus is not provided, starting from an empty corpus 00:25:05.291 #2 INITED exec/s: 0 rss: 65Mb 00:25:05.291 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:05.291 This may also happen if the target rejected all inputs we tried so far 00:25:05.551 NEW_FUNC[1/675]: 0x487f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:25:05.551 NEW_FUNC[2/675]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:05.551 #5 NEW cov: 11741 ft: 11742 corp: 2/6b lim: 20 exec/s: 0 rss: 72Mb L: 5/5 MS: 3 ChangeByte-CopyPart-CMP- DE: "\021\001\000\000"- 00:25:05.809 #11 NEW cov: 11871 ft: 12274 corp: 3/12b lim: 20 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 CrossOver- 00:25:05.810 #17 NEW cov: 11877 ft: 12621 corp: 4/17b lim: 20 exec/s: 0 rss: 72Mb L: 5/6 MS: 1 PersAutoDict- DE: "\021\001\000\000"- 00:25:05.810 NEW_FUNC[1/4]: 0x11d32b0 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3337 00:25:05.810 NEW_FUNC[2/4]: 0x11d3e30 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3279 00:25:05.810 #18 NEW cov: 12020 ft: 12916 corp: 5/22b lim: 20 exec/s: 0 rss: 72Mb L: 5/6 MS: 1 ShuffleBytes- 00:25:05.810 #19 NEW cov: 12020 ft: 13058 corp: 6/28b lim: 20 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 InsertByte- 00:25:05.810 #20 NEW cov: 12020 ft: 13117 corp: 7/35b lim: 20 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 CrossOver- 00:25:05.810 #21 NEW cov: 12020 ft: 13203 corp: 8/41b lim: 20 exec/s: 0 rss: 72Mb L: 6/7 MS: 1 ChangeBinInt- 00:25:06.068 #22 NEW cov: 12020 ft: 13231 corp: 9/46b lim: 20 exec/s: 0 rss: 72Mb L: 5/7 MS: 1 ChangeByte- 00:25:06.068 #26 NEW cov: 12034 ft: 13508 corp: 10/55b lim: 20 exec/s: 0 rss: 72Mb L: 9/9 MS: 4 InsertByte-EraseBytes-ChangeBit-CMP- DE: "\377\015\335LI\206Gz"- 00:25:06.068 #27 NEW cov: 12034 ft: 13616 corp: 11/63b lim: 20 exec/s: 0 rss: 73Mb L: 8/9 MS: 1 CrossOver- 00:25:06.068 #28 NEW cov: 12034 ft: 13654 corp: 12/72b lim: 20 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:25:06.068 #29 NEW cov: 12051 ft: 14064 corp: 13/89b lim: 20 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 CrossOver- 00:25:06.327 NEW_FUNC[1/1]: 0x1a77150 in 
get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:06.327 #30 NEW cov: 12074 ft: 14130 corp: 14/108b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CopyPart- 00:25:06.327 #31 NEW cov: 12074 ft: 14173 corp: 15/117b lim: 20 exec/s: 0 rss: 73Mb L: 9/19 MS: 1 ShuffleBytes- 00:25:06.327 #32 NEW cov: 12074 ft: 14195 corp: 16/126b lim: 20 exec/s: 32 rss: 73Mb L: 9/19 MS: 1 ChangeBinInt- 00:25:06.327 #33 NEW cov: 12074 ft: 14216 corp: 17/135b lim: 20 exec/s: 33 rss: 73Mb L: 9/19 MS: 1 CopyPart- 00:25:06.327 #34 NEW cov: 12074 ft: 14236 corp: 18/142b lim: 20 exec/s: 34 rss: 73Mb L: 7/19 MS: 1 InsertByte- 00:25:06.327 #35 NEW cov: 12074 ft: 14243 corp: 19/148b lim: 20 exec/s: 35 rss: 73Mb L: 6/19 MS: 1 ChangeBinInt- 00:25:06.586 #36 NEW cov: 12078 ft: 14348 corp: 20/161b lim: 20 exec/s: 36 rss: 73Mb L: 13/19 MS: 1 CopyPart- 00:25:06.586 #37 NEW cov: 12078 ft: 14374 corp: 21/168b lim: 20 exec/s: 37 rss: 73Mb L: 7/19 MS: 1 InsertByte- 00:25:06.586 #38 NEW cov: 12078 ft: 14385 corp: 22/179b lim: 20 exec/s: 38 rss: 73Mb L: 11/19 MS: 1 InsertRepeatedBytes- 00:25:06.586 #39 NEW cov: 12078 ft: 14397 corp: 23/189b lim: 20 exec/s: 39 rss: 73Mb L: 10/19 MS: 1 InsertByte- 00:25:06.586 #40 NEW cov: 12078 ft: 14411 corp: 24/193b lim: 20 exec/s: 40 rss: 73Mb L: 4/19 MS: 1 EraseBytes- 00:25:06.586 #41 NEW cov: 12078 ft: 14428 corp: 25/198b lim: 20 exec/s: 41 rss: 73Mb L: 5/19 MS: 1 ChangeBinInt- 00:25:06.845 #42 NEW cov: 12103 ft: 14481 corp: 26/211b lim: 20 exec/s: 42 rss: 73Mb L: 13/19 MS: 1 PersAutoDict- DE: "\021\001\000\000"- 00:25:06.845 #43 NEW cov: 12103 ft: 14551 corp: 27/218b lim: 20 exec/s: 43 rss: 73Mb L: 7/19 MS: 1 CrossOver- 00:25:06.845 #44 NEW cov: 12103 ft: 14563 corp: 28/226b lim: 20 exec/s: 44 rss: 74Mb L: 8/19 MS: 1 CopyPart- 00:25:06.845 #45 NEW cov: 12103 ft: 14566 corp: 29/231b lim: 20 exec/s: 45 rss: 74Mb L: 5/19 MS: 1 PersAutoDict- DE: "\021\001\000\000"- 00:25:06.845 #46 NEW cov: 12103 ft: 14589 corp: 30/250b lim: 20 exec/s: 46 rss: 74Mb L: 19/19 MS: 1 PersAutoDict- DE: "\377\015\335LI\206Gz"- 00:25:06.845 #47 NEW cov: 12103 ft: 14605 corp: 31/258b lim: 20 exec/s: 47 rss: 74Mb L: 8/19 MS: 1 ChangeBit- 00:25:07.104 #48 NEW cov: 12103 ft: 14618 corp: 32/267b lim: 20 exec/s: 48 rss: 74Mb L: 9/19 MS: 1 ShuffleBytes- 00:25:07.104 #49 NEW cov: 12103 ft: 14619 corp: 33/279b lim: 20 exec/s: 49 rss: 74Mb L: 12/19 MS: 1 InsertRepeatedBytes- 00:25:07.104 #50 NEW cov: 12103 ft: 14622 corp: 34/284b lim: 20 exec/s: 50 rss: 74Mb L: 5/19 MS: 1 EraseBytes- 00:25:07.104 #51 NEW cov: 12103 ft: 14687 corp: 35/289b lim: 20 exec/s: 51 rss: 74Mb L: 5/19 MS: 1 ChangeByte- 00:25:07.104 #52 NEW cov: 12103 ft: 14701 corp: 36/295b lim: 20 exec/s: 52 rss: 74Mb L: 6/19 MS: 1 ChangeByte- 00:25:07.363 #53 NEW cov: 12103 ft: 14704 corp: 37/303b lim: 20 exec/s: 53 rss: 74Mb L: 8/19 MS: 1 ChangeByte- 00:25:07.363 #54 NEW cov: 12103 ft: 14710 corp: 38/311b lim: 20 exec/s: 54 rss: 74Mb L: 8/19 MS: 1 ChangeBit- 00:25:07.363 #55 NEW cov: 12103 ft: 14720 corp: 39/315b lim: 20 exec/s: 27 rss: 74Mb L: 4/19 MS: 1 EraseBytes- 00:25:07.363 #55 DONE cov: 12103 ft: 14720 corp: 39/315b lim: 20 exec/s: 27 rss: 74Mb 00:25:07.363 ###### Recommended dictionary. ###### 00:25:07.363 "\021\001\000\000" # Uses: 3 00:25:07.363 "\377\015\335LI\206Gz" # Uses: 1 00:25:07.363 ###### End of recommended dictionary. 
###### 00:25:07.363 Done 55 runs in 2 second(s) 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:07.363 11:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:25:07.623 [2024-06-10 11:35:51.324615] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:07.623 [2024-06-10 11:35:51.324696] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329699 ] 00:25:07.623 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.881 [2024-06-10 11:35:51.635423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.881 [2024-06-10 11:35:51.737326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.881 [2024-06-10 11:35:51.797671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.881 [2024-06-10 11:35:51.813926] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:25:07.881 INFO: Running with entropic power schedule (0xFF, 100). 
00:25:07.881 INFO: Seed: 497917579 00:25:08.140 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:08.140 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:08.140 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:25:08.140 INFO: A corpus is not provided, starting from an empty corpus 00:25:08.140 #2 INITED exec/s: 0 rss: 65Mb 00:25:08.140 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:08.140 This may also happen if the target rejected all inputs we tried so far 00:25:08.140 [2024-06-10 11:35:51.863350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.140 [2024-06-10 11:35:51.863384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.140 [2024-06-10 11:35:51.863437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.140 [2024-06-10 11:35:51.863451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.140 [2024-06-10 11:35:51.863503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.140 [2024-06-10 11:35:51.863517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.140 [2024-06-10 11:35:51.863568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.140 [2024-06-10 11:35:51.863582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.399 NEW_FUNC[1/687]: 0x488ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:25:08.399 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:08.399 #20 NEW cov: 11865 ft: 11866 corp: 2/30b lim: 35 exec/s: 0 rss: 72Mb L: 29/29 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:25:08.399 [2024-06-10 11:35:52.204065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.204112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.399 [2024-06-10 11:35:52.204174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.204191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.399 [2024-06-10 11:35:52.204251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:08.399 [2024-06-10 11:35:52.204269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.399 [2024-06-10 11:35:52.204326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.204342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.399 #21 NEW cov: 11995 ft: 12460 corp: 3/60b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertByte- 00:25:08.399 [2024-06-10 11:35:52.263741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.263770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.399 [2024-06-10 11:35:52.263823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.263837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.399 #22 NEW cov: 12001 ft: 13086 corp: 4/79b lim: 35 exec/s: 0 rss: 72Mb L: 19/30 MS: 1 EraseBytes- 00:25:08.399 [2024-06-10 11:35:52.304199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.304227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.399 [2024-06-10 11:35:52.304285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.304299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.399 [2024-06-10 11:35:52.304351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.304364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.399 [2024-06-10 11:35:52.304416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:002b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.399 [2024-06-10 11:35:52.304429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.399 #23 NEW cov: 12086 ft: 13357 corp: 5/113b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CMP- DE: "\377\377\377\020"- 00:25:08.658 [2024-06-10 11:35:52.354348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.658 [2024-06-10 11:35:52.354373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.658 [2024-06-10 11:35:52.354426] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.658 [2024-06-10 11:35:52.354440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.658 [2024-06-10 11:35:52.354490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.658 [2024-06-10 11:35:52.354503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.658 [2024-06-10 11:35:52.354552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.658 [2024-06-10 11:35:52.354565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.658 #24 NEW cov: 12086 ft: 13513 corp: 6/143b lim: 35 exec/s: 0 rss: 72Mb L: 30/34 MS: 1 ChangeBinInt- 00:25:08.658 [2024-06-10 11:35:52.394129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.658 [2024-06-10 11:35:52.394153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.658 [2024-06-10 11:35:52.394223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.394236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.659 #25 NEW cov: 12086 ft: 13602 corp: 7/163b lim: 35 exec/s: 0 rss: 72Mb L: 20/34 MS: 1 EraseBytes- 00:25:08.659 [2024-06-10 11:35:52.444604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.444629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.659 [2024-06-10 11:35:52.444684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.444702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.659 [2024-06-10 11:35:52.444754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.444768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.659 [2024-06-10 11:35:52.444819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.444832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.659 #29 NEW cov: 12086 ft: 13687 corp: 8/194b 
lim: 35 exec/s: 0 rss: 72Mb L: 31/34 MS: 4 CrossOver-InsertByte-PersAutoDict-InsertRepeatedBytes- DE: "\377\377\377\020"- 00:25:08.659 [2024-06-10 11:35:52.484411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.484436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.659 [2024-06-10 11:35:52.484490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.484504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.659 #30 NEW cov: 12086 ft: 13713 corp: 9/210b lim: 35 exec/s: 0 rss: 72Mb L: 16/34 MS: 1 CrossOver- 00:25:08.659 [2024-06-10 11:35:52.524817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00ff0000 cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.524841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.659 [2024-06-10 11:35:52.524893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.524907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.659 [2024-06-10 11:35:52.524956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.524970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.659 [2024-06-10 11:35:52.525020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.525049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.659 #31 NEW cov: 12086 ft: 13786 corp: 10/243b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 CrossOver- 00:25:08.659 [2024-06-10 11:35:52.574644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.574668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.659 [2024-06-10 11:35:52.574724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.659 [2024-06-10 11:35:52.574737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.918 #32 NEW cov: 12086 ft: 13813 corp: 11/263b lim: 35 exec/s: 0 rss: 73Mb L: 20/34 MS: 1 ShuffleBytes- 00:25:08.918 [2024-06-10 11:35:52.624998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 
cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.918 [2024-06-10 11:35:52.625024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.918 [2024-06-10 11:35:52.625093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.918 [2024-06-10 11:35:52.625107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.918 [2024-06-10 11:35:52.625159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.918 [2024-06-10 11:35:52.625172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.918 #33 NEW cov: 12086 ft: 14039 corp: 12/286b lim: 35 exec/s: 0 rss: 73Mb L: 23/34 MS: 1 InsertRepeatedBytes- 00:25:08.918 [2024-06-10 11:35:52.675293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.918 [2024-06-10 11:35:52.675320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.918 [2024-06-10 11:35:52.675374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.918 [2024-06-10 11:35:52.675388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.918 [2024-06-10 11:35:52.675438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.918 [2024-06-10 11:35:52.675451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.918 [2024-06-10 11:35:52.675504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.918 [2024-06-10 11:35:52.675517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.918 #34 NEW cov: 12086 ft: 14086 corp: 13/320b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:25:08.918 [2024-06-10 11:35:52.715229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dd4dff0d cdw11:d6c10000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.918 [2024-06-10 11:35:52.715260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.918 [2024-06-10 11:35:52.715331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0a00322d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.715345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.715399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.715413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.919 #35 NEW cov: 12086 ft: 14096 corp: 14/344b lim: 35 exec/s: 0 rss: 73Mb L: 24/34 MS: 1 CMP- DE: "\377\015\335M\326\301'2"- 00:25:08.919 [2024-06-10 11:35:52.755478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.755507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.755575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.755590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.755642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.755656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.755705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:002b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.755719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.919 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:08.919 #36 NEW cov: 12109 ft: 14175 corp: 15/378b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeBit- 00:25:08.919 [2024-06-10 11:35:52.805432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.805458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.805509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.805524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.805575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.805604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.919 #37 NEW cov: 12109 ft: 14205 corp: 16/401b lim: 35 exec/s: 0 rss: 73Mb L: 23/34 MS: 1 ChangeBit- 00:25:08.919 [2024-06-10 11:35:52.855782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 
11:35:52.855808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.855860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fc000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.855875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.855928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.855942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.919 [2024-06-10 11:35:52.855992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:002b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.919 [2024-06-10 11:35:52.856006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.178 #38 NEW cov: 12109 ft: 14236 corp: 17/435b lim: 35 exec/s: 38 rss: 73Mb L: 34/34 MS: 1 ChangeBinInt- 00:25:09.178 [2024-06-10 11:35:52.895559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.895588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:52.895656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.895670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.178 #39 NEW cov: 12109 ft: 14258 corp: 18/452b lim: 35 exec/s: 39 rss: 73Mb L: 17/34 MS: 1 EraseBytes- 00:25:09.178 [2024-06-10 11:35:52.945995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff2d0a cdw11:ff100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.946020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:52.946089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.946104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:52.946156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.946170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:52.946222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:002b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 
[2024-06-10 11:35:52.946236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.178 #40 NEW cov: 12109 ft: 14269 corp: 19/486b lim: 35 exec/s: 40 rss: 73Mb L: 34/34 MS: 1 PersAutoDict- DE: "\377\377\377\020"- 00:25:09.178 [2024-06-10 11:35:52.986104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.986128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:52.986197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.986211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:52.986267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.986281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:52.986336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff002b cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:52.986349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.178 #41 NEW cov: 12109 ft: 14292 corp: 20/520b lim: 35 exec/s: 41 rss: 73Mb L: 34/34 MS: 1 PersAutoDict- DE: "\377\377\377\020"- 00:25:09.178 [2024-06-10 11:35:53.026101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:53.026125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:53.026177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:53.026193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:53.026249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:53.026263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.178 #42 NEW cov: 12109 ft: 14300 corp: 21/546b lim: 35 exec/s: 42 rss: 73Mb L: 26/34 MS: 1 InsertRepeatedBytes- 00:25:09.178 [2024-06-10 11:35:53.076067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:53.076091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:53.076164] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:53.076178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.178 #43 NEW cov: 12109 ft: 14311 corp: 22/562b lim: 35 exec/s: 43 rss: 73Mb L: 16/34 MS: 1 ChangeByte- 00:25:09.178 [2024-06-10 11:35:53.116545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:53.116570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.178 [2024-06-10 11:35:53.116625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.178 [2024-06-10 11:35:53.116639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.179 [2024-06-10 11:35:53.116691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.179 [2024-06-10 11:35:53.116704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.179 [2024-06-10 11:35:53.116756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff002b cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.179 [2024-06-10 11:35:53.116768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.438 #44 NEW cov: 12109 ft: 14321 corp: 23/596b lim: 35 exec/s: 44 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:25:09.438 [2024-06-10 11:35:53.166340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff0d2d0a cdw11:dd4d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.166365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.166420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:3200c127 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.166434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.438 #45 NEW cov: 12109 ft: 14329 corp: 24/612b lim: 35 exec/s: 45 rss: 73Mb L: 16/34 MS: 1 PersAutoDict- DE: "\377\015\335M\326\301'2"- 00:25:09.438 [2024-06-10 11:35:53.206754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.206779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.206838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fc000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.206852] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.206903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.206916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.206967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:002f0000 cdw11:002b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.206981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.438 #46 NEW cov: 12109 ft: 14345 corp: 25/646b lim: 35 exec/s: 46 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:25:09.438 [2024-06-10 11:35:53.256764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.256790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.256846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.256860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.256911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.256924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.438 #47 NEW cov: 12109 ft: 14391 corp: 26/669b lim: 35 exec/s: 47 rss: 73Mb L: 23/34 MS: 1 EraseBytes- 00:25:09.438 [2024-06-10 11:35:53.296990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.297014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.297067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.297081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.297133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.297145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.297195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.297208] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.438 #48 NEW cov: 12109 ft: 14397 corp: 27/703b lim: 35 exec/s: 48 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:25:09.438 [2024-06-10 11:35:53.337140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:51000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.337165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.337236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:be18dd52 cdw11:2b0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.337255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.438 [2024-06-10 11:35:53.337310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.438 [2024-06-10 11:35:53.337323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.439 [2024-06-10 11:35:53.337378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.439 [2024-06-10 11:35:53.337391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.439 #49 NEW cov: 12109 ft: 14438 corp: 28/734b lim: 35 exec/s: 49 rss: 74Mb L: 31/34 MS: 1 CMP- DE: "\000\016\335R\276\030+\012"- 00:25:09.439 [2024-06-10 11:35:53.377256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:51000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.439 [2024-06-10 11:35:53.377281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.439 [2024-06-10 11:35:53.377336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:be18dd52 cdw11:2b0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.439 [2024-06-10 11:35:53.377350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.439 [2024-06-10 11:35:53.377403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:40515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.439 [2024-06-10 11:35:53.377416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.439 [2024-06-10 11:35:53.377468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.439 [2024-06-10 11:35:53.377481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.698 #50 NEW cov: 12109 ft: 14460 corp: 29/765b lim: 35 exec/s: 50 rss: 74Mb L: 31/34 MS: 1 ChangeByte- 00:25:09.698 [2024-06-10 11:35:53.427396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:4 nsid:0 cdw10:ffff00ff cdw11:51000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.698 [2024-06-10 11:35:53.427421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.698 [2024-06-10 11:35:53.427475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:be18dd52 cdw11:2b0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.698 [2024-06-10 11:35:53.427489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.698 [2024-06-10 11:35:53.427544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:40515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.698 [2024-06-10 11:35:53.427557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.698 [2024-06-10 11:35:53.427610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.698 [2024-06-10 11:35:53.427623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.698 #51 NEW cov: 12109 ft: 14482 corp: 30/796b lim: 35 exec/s: 51 rss: 74Mb L: 31/34 MS: 1 ShuffleBytes- 00:25:09.698 [2024-06-10 11:35:53.477686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.698 [2024-06-10 11:35:53.477711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.698 [2024-06-10 11:35:53.477782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.698 [2024-06-10 11:35:53.477796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.698 [2024-06-10 11:35:53.477850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:a1ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.477863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.477915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00001000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.477928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.477979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.477993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:09.699 #52 NEW cov: 12109 ft: 14528 corp: 31/831b lim: 35 exec/s: 52 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:25:09.699 [2024-06-10 11:35:53.517493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.517518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.517588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.517601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.517654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.517667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.699 #53 NEW cov: 12109 ft: 14535 corp: 32/854b lim: 35 exec/s: 53 rss: 74Mb L: 23/35 MS: 1 ShuffleBytes- 00:25:09.699 [2024-06-10 11:35:53.567827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.567852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.567908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.567922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.567975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.567988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.568044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff002b cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.568057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.699 #54 NEW cov: 12109 ft: 14537 corp: 33/888b lim: 35 exec/s: 54 rss: 74Mb L: 34/35 MS: 1 CrossOver- 00:25:09.699 [2024-06-10 11:35:53.607942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f7ff2d0a cdw11:ff100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.607966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.608021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.608035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.608099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.608113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.699 [2024-06-10 11:35:53.608165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:002b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.699 [2024-06-10 11:35:53.608178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.699 #55 NEW cov: 12109 ft: 14560 corp: 34/922b lim: 35 exec/s: 55 rss: 74Mb L: 34/35 MS: 1 ChangeBit- 00:25:09.959 [2024-06-10 11:35:53.658027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.658051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.658105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.658118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.658172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.658185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.658235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.658252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.959 #56 NEW cov: 12109 ft: 14570 corp: 35/956b lim: 35 exec/s: 56 rss: 74Mb L: 34/35 MS: 1 CMP- DE: "\000\000\000\003"- 00:25:09.959 [2024-06-10 11:35:53.707758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.707783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.959 #57 NEW cov: 12109 ft: 15291 corp: 36/967b lim: 35 exec/s: 57 rss: 74Mb L: 11/35 MS: 1 EraseBytes- 00:25:09.959 [2024-06-10 11:35:53.748349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00ff0000 cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.748375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.748431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.748445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.748499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.748512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.748564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.748577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.798422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00ff4000 cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.798446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.798499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.798513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.798564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.798577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.798630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.798643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.959 #59 NEW cov: 12109 ft: 15375 corp: 37/997b lim: 35 exec/s: 59 rss: 74Mb L: 30/35 MS: 2 EraseBytes-ChangeBit- 00:25:09.959 [2024-06-10 11:35:53.838527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.838552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.838604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.838617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.838668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.838682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.959 [2024-06-10 11:35:53.838737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:102b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.959 [2024-06-10 11:35:53.838750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.959 #60 NEW cov: 12109 ft: 15381 corp: 38/1031b lim: 35 exec/s: 30 rss: 74Mb L: 34/35 MS: 1 PersAutoDict- DE: "\377\377\377\020"- 00:25:09.959 #60 DONE cov: 12109 ft: 15381 corp: 38/1031b lim: 35 exec/s: 30 rss: 74Mb 00:25:09.959 ###### Recommended dictionary. ###### 00:25:09.959 "\377\377\377\020" # Uses: 4 00:25:09.959 "\377\015\335M\326\301'2" # Uses: 1 00:25:09.959 "\000\016\335R\276\030+\012" # Uses: 0 00:25:09.959 "\000\000\000\003" # Uses: 0 00:25:09.959 ###### End of recommended dictionary. ###### 00:25:09.959 Done 60 runs in 2 second(s) 00:25:10.219 11:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:10.219 11:35:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:25:10.219 [2024-06-10 11:35:54.050615] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:25:10.219 [2024-06-10 11:35:54.050699] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330067 ] 00:25:10.219 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.478 [2024-06-10 11:35:54.344355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.737 [2024-06-10 11:35:54.442504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.737 [2024-06-10 11:35:54.502724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.737 [2024-06-10 11:35:54.518983] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:25:10.737 INFO: Running with entropic power schedule (0xFF, 100). 00:25:10.737 INFO: Seed: 3202906883 00:25:10.737 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:10.737 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:10.737 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:25:10.737 INFO: A corpus is not provided, starting from an empty corpus 00:25:10.737 #2 INITED exec/s: 0 rss: 65Mb 00:25:10.737 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:10.737 This may also happen if the target rejected all inputs we tried so far 00:25:10.737 [2024-06-10 11:35:54.567716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.737 [2024-06-10 11:35:54.567748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:10.996 NEW_FUNC[1/687]: 0x48b180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:25:10.996 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:10.996 #6 NEW cov: 11876 ft: 11877 corp: 2/13b lim: 45 exec/s: 0 rss: 71Mb L: 12/12 MS: 4 CopyPart-CopyPart-ChangeByte-InsertRepeatedBytes- 00:25:10.996 [2024-06-10 11:35:54.898544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.996 [2024-06-10 11:35:54.898584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:10.996 #7 NEW cov: 12006 ft: 12451 corp: 3/25b lim: 45 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 ChangeBinInt- 00:25:11.255 [2024-06-10 11:35:54.948734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:54.948762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.255 [2024-06-10 11:35:54.948816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:54.948830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:11.255 #14 NEW cov: 12012 ft: 13438 corp: 4/48b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 2 InsertByte-InsertRepeatedBytes- 00:25:11.255 [2024-06-10 11:35:54.989020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:54.989046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.255 [2024-06-10 11:35:54.989101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff00d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:54.989115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:11.255 [2024-06-10 11:35:54.989165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:54.989179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:11.255 #15 NEW cov: 12097 ft: 13934 corp: 5/76b lim: 45 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:25:11.255 [2024-06-10 11:35:55.038836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:55.038861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.255 #16 NEW cov: 12097 ft: 14129 corp: 6/88b lim: 45 exec/s: 0 rss: 72Mb L: 12/28 MS: 1 ChangeByte- 00:25:11.255 [2024-06-10 11:35:55.078939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dd4f010e cdw11:19fe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:55.078964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.255 #18 NEW cov: 12097 ft: 14175 corp: 7/97b lim: 45 exec/s: 0 rss: 72Mb L: 9/28 MS: 2 ChangeBit-CMP- DE: "\001\016\335O\031\376\001\370"- 00:25:11.255 [2024-06-10 11:35:55.119379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:55.119409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.255 [2024-06-10 11:35:55.119480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:55.119494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:11.255 [2024-06-10 11:35:55.119544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:55.119557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:11.255 #19 NEW cov: 12097 ft: 14230 corp: 8/126b lim: 
45 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 CopyPart- 00:25:11.255 [2024-06-10 11:35:55.169238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.255 [2024-06-10 11:35:55.169269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.255 #20 NEW cov: 12097 ft: 14262 corp: 9/139b lim: 45 exec/s: 0 rss: 72Mb L: 13/29 MS: 1 CrossOver- 00:25:11.515 [2024-06-10 11:35:55.209537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.209562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.515 [2024-06-10 11:35:55.209613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.209627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:11.515 #21 NEW cov: 12097 ft: 14306 corp: 10/162b lim: 45 exec/s: 0 rss: 72Mb L: 23/29 MS: 1 ChangeByte- 00:25:11.515 [2024-06-10 11:35:55.249464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dd4f010e cdw11:19fe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.249489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.515 #22 NEW cov: 12097 ft: 14372 corp: 11/171b lim: 45 exec/s: 0 rss: 72Mb L: 9/29 MS: 1 CopyPart- 00:25:11.515 [2024-06-10 11:35:55.299636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.299663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.515 #23 NEW cov: 12097 ft: 14389 corp: 12/184b lim: 45 exec/s: 0 rss: 72Mb L: 13/29 MS: 1 InsertByte- 00:25:11.515 [2024-06-10 11:35:55.339700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0edd0101 cdw11:4f190007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.339727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.515 #24 NEW cov: 12097 ft: 14412 corp: 13/201b lim: 45 exec/s: 0 rss: 72Mb L: 17/29 MS: 1 PersAutoDict- DE: "\001\016\335O\031\376\001\370"- 00:25:11.515 [2024-06-10 11:35:55.379825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.379851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.515 #25 NEW cov: 12097 ft: 14429 corp: 14/214b lim: 45 exec/s: 0 rss: 72Mb L: 13/29 MS: 1 InsertByte- 00:25:11.515 [2024-06-10 11:35:55.420097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:11.515 [2024-06-10 11:35:55.420123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.515 [2024-06-10 11:35:55.420177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0a0100d7 cdw11:0edd0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.420191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:11.515 #26 NEW cov: 12097 ft: 14473 corp: 15/234b lim: 45 exec/s: 0 rss: 72Mb L: 20/29 MS: 1 PersAutoDict- DE: "\001\016\335O\031\376\001\370"- 00:25:11.515 [2024-06-10 11:35:55.460395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.460422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.515 [2024-06-10 11:35:55.460477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:19fedd4f cdw11:01f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.460491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:11.515 [2024-06-10 11:35:55.460539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.515 [2024-06-10 11:35:55.460552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:11.774 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:11.774 #27 NEW cov: 12120 ft: 14517 corp: 16/262b lim: 45 exec/s: 0 rss: 72Mb L: 28/29 MS: 1 PersAutoDict- DE: "\001\016\335O\031\376\001\370"- 00:25:11.774 [2024-06-10 11:35:55.510218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.774 [2024-06-10 11:35:55.510250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.774 #28 NEW cov: 12120 ft: 14535 corp: 17/276b lim: 45 exec/s: 0 rss: 72Mb L: 14/29 MS: 1 CrossOver- 00:25:11.774 [2024-06-10 11:35:55.550298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.774 [2024-06-10 11:35:55.550325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.774 #29 NEW cov: 12120 ft: 14549 corp: 18/288b lim: 45 exec/s: 29 rss: 73Mb L: 12/29 MS: 1 CrossOver- 00:25:11.774 [2024-06-10 11:35:55.600598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.774 [2024-06-10 11:35:55.600623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.774 [2024-06-10 11:35:55.600677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.774 [2024-06-10 11:35:55.600691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:11.774 #30 NEW cov: 12120 ft: 14585 corp: 19/309b lim: 45 exec/s: 30 rss: 73Mb L: 21/29 MS: 1 EraseBytes- 00:25:11.774 [2024-06-10 11:35:55.640557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dd4f010e cdw11:19fe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.774 [2024-06-10 11:35:55.640583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.774 #31 NEW cov: 12120 ft: 14628 corp: 20/318b lim: 45 exec/s: 31 rss: 73Mb L: 9/29 MS: 1 ChangeByte- 00:25:11.774 [2024-06-10 11:35:55.690671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.774 [2024-06-10 11:35:55.690696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.774 #32 NEW cov: 12120 ft: 14631 corp: 21/330b lim: 45 exec/s: 32 rss: 73Mb L: 12/29 MS: 1 ChangeByte- 00:25:12.033 [2024-06-10 11:35:55.731147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.731174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.033 [2024-06-10 11:35:55.731241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff00d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.731261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.033 [2024-06-10 11:35:55.731313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.731329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.033 #33 NEW cov: 12120 ft: 14667 corp: 22/358b lim: 45 exec/s: 33 rss: 73Mb L: 28/29 MS: 1 ShuffleBytes- 00:25:12.033 [2024-06-10 11:35:55.771266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dd4f010e cdw11:19fe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.771294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.033 [2024-06-10 11:35:55.771346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.771361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.033 [2024-06-10 11:35:55.771412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.771425] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.033 #34 NEW cov: 12120 ft: 14693 corp: 23/387b lim: 45 exec/s: 34 rss: 73Mb L: 29/29 MS: 1 PersAutoDict- DE: "\001\016\335O\031\376\001\370"- 00:25:12.033 [2024-06-10 11:35:55.821390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.821415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.033 [2024-06-10 11:35:55.821468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff002e cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.821482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.033 [2024-06-10 11:35:55.821532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.821546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.033 #35 NEW cov: 12120 ft: 14712 corp: 24/414b lim: 45 exec/s: 35 rss: 73Mb L: 27/29 MS: 1 InsertRepeatedBytes- 00:25:12.033 [2024-06-10 11:35:55.861204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00010000 cdw11:0edd0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.861232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.033 #36 NEW cov: 12120 ft: 14725 corp: 25/426b lim: 45 exec/s: 36 rss: 73Mb L: 12/29 MS: 1 PersAutoDict- DE: "\001\016\335O\031\376\001\370"- 00:25:12.033 [2024-06-10 11:35:55.901315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00f70000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.901340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.033 #37 NEW cov: 12120 ft: 14734 corp: 26/439b lim: 45 exec/s: 37 rss: 73Mb L: 13/29 MS: 1 InsertByte- 00:25:12.033 [2024-06-10 11:35:55.941399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:014c0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.941424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.033 #38 NEW cov: 12120 ft: 14750 corp: 27/451b lim: 45 exec/s: 38 rss: 73Mb L: 12/29 MS: 1 CMP- DE: "\001\000\001L"- 00:25:12.033 [2024-06-10 11:35:55.981532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00010a01 cdw11:4c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.033 [2024-06-10 11:35:55.981557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.292 #39 NEW cov: 12120 ft: 14779 corp: 28/464b lim: 45 exec/s: 39 rss: 73Mb L: 13/29 MS: 1 PersAutoDict- DE: "\001\000\001L"- 00:25:12.292 [2024-06-10 11:35:56.031994] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.292 [2024-06-10 11:35:56.032019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.292 [2024-06-10 11:35:56.032072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff00d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.292 [2024-06-10 11:35:56.032085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.292 [2024-06-10 11:35:56.032135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.292 [2024-06-10 11:35:56.032149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.292 #40 NEW cov: 12120 ft: 14819 corp: 29/492b lim: 45 exec/s: 40 rss: 73Mb L: 28/29 MS: 1 ShuffleBytes- 00:25:12.292 [2024-06-10 11:35:56.071813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff38ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.292 [2024-06-10 11:35:56.071838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.292 #44 NEW cov: 12120 ft: 14835 corp: 30/508b lim: 45 exec/s: 44 rss: 73Mb L: 16/29 MS: 4 ChangeBinInt-ChangeBit-ChangeBinInt-CrossOver- 00:25:12.292 [2024-06-10 11:35:56.111913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:41000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.292 [2024-06-10 11:35:56.111938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.292 #45 NEW cov: 12120 ft: 14852 corp: 31/521b lim: 45 exec/s: 45 rss: 73Mb L: 13/29 MS: 1 InsertByte- 00:25:12.292 [2024-06-10 11:35:56.162536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dd4f010e cdw11:19fe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.292 [2024-06-10 11:35:56.162562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.292 [2024-06-10 11:35:56.162616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.292 [2024-06-10 11:35:56.162630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.292 [2024-06-10 11:35:56.162680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.292 [2024-06-10 11:35:56.162693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.293 [2024-06-10 11:35:56.162742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:19fedd4f cdw11:01f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.293 [2024-06-10 11:35:56.162755] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:12.293 #46 NEW cov: 12120 ft: 15187 corp: 32/558b lim: 45 exec/s: 46 rss: 73Mb L: 37/37 MS: 1 PersAutoDict- DE: "\001\016\335O\031\376\001\370"- 00:25:12.293 [2024-06-10 11:35:56.212190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00190000 cdw11:fe010007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.293 [2024-06-10 11:35:56.212216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.552 #48 NEW cov: 12120 ft: 15194 corp: 33/571b lim: 45 exec/s: 48 rss: 73Mb L: 13/37 MS: 2 EraseBytes-InsertRepeatedBytes- 00:25:12.552 [2024-06-10 11:35:56.262333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27000a00 cdw11:27010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.262359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.552 #53 NEW cov: 12120 ft: 15217 corp: 34/581b lim: 45 exec/s: 53 rss: 73Mb L: 10/37 MS: 5 PersAutoDict-CrossOver-ShuffleBytes-InsertByte-CopyPart- DE: "\001\000\001L"- 00:25:12.552 [2024-06-10 11:35:56.302458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:014c0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.302483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.552 #59 NEW cov: 12120 ft: 15264 corp: 35/593b lim: 45 exec/s: 59 rss: 73Mb L: 12/37 MS: 1 CopyPart- 00:25:12.552 [2024-06-10 11:35:56.352744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff4fffff cdw11:19fe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.352770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.552 [2024-06-10 11:35:56.352824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.352838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.552 #60 NEW cov: 12120 ft: 15274 corp: 36/616b lim: 45 exec/s: 60 rss: 73Mb L: 23/37 MS: 1 CrossOver- 00:25:12.552 [2024-06-10 11:35:56.402887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.402913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.552 [2024-06-10 11:35:56.402982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff002e cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.402996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.552 #61 NEW cov: 12120 ft: 15332 corp: 37/639b lim: 45 exec/s: 61 rss: 73Mb L: 23/37 MS: 1 EraseBytes- 00:25:12.552 [2024-06-10 11:35:56.452899] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00010000 cdw11:0edd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.452924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.552 #62 NEW cov: 12120 ft: 15359 corp: 38/651b lim: 45 exec/s: 62 rss: 74Mb L: 12/37 MS: 1 PersAutoDict- DE: "\001\000\001L"- 00:25:12.552 [2024-06-10 11:35:56.493321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.493346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.552 [2024-06-10 11:35:56.493399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff002e cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.493413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.552 [2024-06-10 11:35:56.493481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.552 [2024-06-10 11:35:56.493495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.810 #63 NEW cov: 12120 ft: 15366 corp: 39/678b lim: 45 exec/s: 63 rss: 74Mb L: 27/37 MS: 1 ChangeBit- 00:25:12.810 [2024-06-10 11:35:56.533424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.810 [2024-06-10 11:35:56.533449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.810 [2024-06-10 11:35:56.533501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff00d7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.810 [2024-06-10 11:35:56.533515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.810 [2024-06-10 11:35:56.533565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:7effffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.810 [2024-06-10 11:35:56.533577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.810 #64 pulse cov: 12120 ft: 15381 corp: 39/678b lim: 45 exec/s: 32 rss: 74Mb 00:25:12.810 #64 NEW cov: 12120 ft: 15381 corp: 40/706b lim: 45 exec/s: 32 rss: 74Mb L: 28/37 MS: 1 ChangeByte- 00:25:12.811 #64 DONE cov: 12120 ft: 15381 corp: 40/706b lim: 45 exec/s: 32 rss: 74Mb 00:25:12.811 ###### Recommended dictionary. ###### 00:25:12.811 "\001\016\335O\031\376\001\370" # Uses: 6 00:25:12.811 "\001\000\001L" # Uses: 3 00:25:12.811 ###### End of recommended dictionary. 
###### 00:25:12.811 Done 64 runs in 2 second(s) 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:12.811 11:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:25:13.069 [2024-06-10 11:35:56.760992] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:13.070 [2024-06-10 11:35:56.761075] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330437 ] 00:25:13.070 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.328 [2024-06-10 11:35:57.081075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.328 [2024-06-10 11:35:57.174655] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.328 [2024-06-10 11:35:57.234941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.328 [2024-06-10 11:35:57.251187] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:25:13.328 INFO: Running with entropic power schedule (0xFF, 100). 
00:25:13.328 INFO: Seed: 1641953087 00:25:13.587 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:13.587 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:13.587 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:25:13.587 INFO: A corpus is not provided, starting from an empty corpus 00:25:13.587 #2 INITED exec/s: 0 rss: 65Mb 00:25:13.587 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:13.587 This may also happen if the target rejected all inputs we tried so far 00:25:13.587 [2024-06-10 11:35:57.306788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.587 [2024-06-10 11:35:57.306819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.587 [2024-06-10 11:35:57.306889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.587 [2024-06-10 11:35:57.306903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.587 [2024-06-10 11:35:57.306958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.587 [2024-06-10 11:35:57.306972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:13.846 NEW_FUNC[1/685]: 0x48d990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:25:13.846 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:13.846 #5 NEW cov: 11793 ft: 11777 corp: 2/8b lim: 10 exec/s: 0 rss: 71Mb L: 7/7 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:25:13.846 [2024-06-10 11:35:57.627335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:13.846 [2024-06-10 11:35:57.627377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.846 #6 NEW cov: 11923 ft: 12624 corp: 3/10b lim: 10 exec/s: 0 rss: 71Mb L: 2/7 MS: 1 CrossOver- 00:25:13.846 [2024-06-10 11:35:57.667553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:13.846 [2024-06-10 11:35:57.667580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.846 [2024-06-10 11:35:57.667646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:13.846 [2024-06-10 11:35:57.667660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.846 [2024-06-10 11:35:57.667711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:13.846 [2024-06-10 11:35:57.667724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:25:13.846 #7 NEW cov: 11929 ft: 12895 corp: 4/17b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:25:13.846 [2024-06-10 11:35:57.717732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:13.846 [2024-06-10 11:35:57.717757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.846 [2024-06-10 11:35:57.717824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:13.847 [2024-06-10 11:35:57.717838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.847 [2024-06-10 11:35:57.717890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fffd cdw11:00000000 00:25:13.847 [2024-06-10 11:35:57.717903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:13.847 #13 NEW cov: 12014 ft: 13133 corp: 5/24b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ChangeBit- 00:25:13.847 [2024-06-10 11:35:57.767768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:13.847 [2024-06-10 11:35:57.767792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.847 [2024-06-10 11:35:57.767859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:13.847 [2024-06-10 11:35:57.767873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.847 #19 NEW cov: 12014 ft: 13382 corp: 6/28b lim: 10 exec/s: 0 rss: 72Mb L: 4/7 MS: 1 CopyPart- 00:25:14.106 [2024-06-10 11:35:57.808141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.808166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.106 [2024-06-10 11:35:57.808235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.808254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.106 [2024-06-10 11:35:57.808304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.808328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.106 [2024-06-10 11:35:57.808381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.808395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.106 #21 NEW cov: 12014 ft: 13678 corp: 7/37b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 2 CrossOver-InsertRepeatedBytes- 00:25:14.106 [2024-06-10 11:35:57.848080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.848106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.106 [2024-06-10 11:35:57.848157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.848171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.106 [2024-06-10 11:35:57.848221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.848234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.106 #22 NEW cov: 12014 ft: 13773 corp: 8/44b lim: 10 exec/s: 0 rss: 72Mb L: 7/9 MS: 1 ShuffleBytes- 00:25:14.106 [2024-06-10 11:35:57.887966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a42 cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.887991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.106 #23 NEW cov: 12014 ft: 13816 corp: 9/47b lim: 10 exec/s: 0 rss: 72Mb L: 3/9 MS: 1 InsertByte- 00:25:14.106 [2024-06-10 11:35:57.928322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a1a cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.928346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.106 [2024-06-10 11:35:57.928415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.928428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.106 [2024-06-10 11:35:57.928481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fffd cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.928494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.106 #24 NEW cov: 12014 ft: 13920 corp: 10/54b lim: 10 exec/s: 0 rss: 72Mb L: 7/9 MS: 1 ChangeBit- 00:25:14.106 [2024-06-10 11:35:57.978254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:14.106 [2024-06-10 11:35:57.978280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.106 #25 NEW cov: 12014 ft: 14021 corp: 11/56b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 CrossOver- 00:25:14.106 [2024-06-10 11:35:58.028494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:25:14.106 [2024-06-10 11:35:58.028521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.106 [2024-06-10 11:35:58.028587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.106 [2024-06-10 11:35:58.028601] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.106 #26 NEW cov: 12014 ft: 14041 corp: 12/60b lim: 10 exec/s: 0 rss: 72Mb L: 4/9 MS: 1 EraseBytes- 00:25:14.366 [2024-06-10 11:35:58.068724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a1a cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.068751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.068822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.068836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.068887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.068901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.366 #27 NEW cov: 12014 ft: 14092 corp: 13/67b lim: 10 exec/s: 0 rss: 72Mb L: 7/9 MS: 1 CrossOver- 00:25:14.366 [2024-06-10 11:35:58.118990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a1a cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.119017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.119068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff02 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.119082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.119131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.119145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.119195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.119208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.366 #28 NEW cov: 12014 ft: 14128 corp: 14/75b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 InsertByte- 00:25:14.366 [2024-06-10 11:35:58.169095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ae5 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.169123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.169188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f902 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.169202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.169255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 
cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.169269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.169322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.169336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.366 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:14.366 #29 NEW cov: 12037 ft: 14203 corp: 15/83b lim: 10 exec/s: 0 rss: 73Mb L: 8/9 MS: 1 ChangeBinInt- 00:25:14.366 [2024-06-10 11:35:58.219035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.219061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.219116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.219130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.366 #30 NEW cov: 12037 ft: 14248 corp: 16/87b lim: 10 exec/s: 0 rss: 73Mb L: 4/9 MS: 1 ShuffleBytes- 00:25:14.366 [2024-06-10 11:35:58.269073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003f1a cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.269098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.366 #32 NEW cov: 12037 ft: 14283 corp: 17/89b lim: 10 exec/s: 32 rss: 73Mb L: 2/9 MS: 2 ChangeBit-InsertByte- 00:25:14.366 [2024-06-10 11:35:58.309512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.309537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.309587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.309602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.309651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.309665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.366 [2024-06-10 11:35:58.309713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.366 [2024-06-10 11:35:58.309726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.625 #33 NEW cov: 12037 ft: 14293 corp: 18/98b lim: 10 exec/s: 33 rss: 73Mb L: 9/9 MS: 1 CMP- DE: "\000\000"- 00:25:14.625 [2024-06-10 11:35:58.359786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a1a cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.359811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.359878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff02 cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.359892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.359942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000aff cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.359956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.360008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.360021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.360075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff00 cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.360088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:14.625 #34 NEW cov: 12037 ft: 14345 corp: 19/108b lim: 10 exec/s: 34 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:25:14.625 [2024-06-10 11:35:58.399558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a42 cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.399582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.399654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000420a cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.399668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.625 #35 NEW cov: 12037 ft: 14349 corp: 20/113b lim: 10 exec/s: 35 rss: 73Mb L: 5/10 MS: 1 CopyPart- 00:25:14.625 [2024-06-10 11:35:58.450020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.450045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.450098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.450111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.450180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.450194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.450248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.450262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.450315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00001a0a cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.450329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:14.625 #36 NEW cov: 12037 ft: 14368 corp: 21/123b lim: 10 exec/s: 36 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:25:14.625 [2024-06-10 11:35:58.500086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a1a cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.500111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.500163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff02 cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.500176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.625 [2024-06-10 11:35:58.500226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000aff cdw11:00000000 00:25:14.625 [2024-06-10 11:35:58.500239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.626 [2024-06-10 11:35:58.500311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a1a cdw11:00000000 00:25:14.626 [2024-06-10 11:35:58.500324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.626 #37 NEW cov: 12037 ft: 14394 corp: 22/132b lim: 10 exec/s: 37 rss: 73Mb L: 9/10 MS: 1 CrossOver- 00:25:14.626 [2024-06-10 11:35:58.550191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.626 [2024-06-10 11:35:58.550216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.626 [2024-06-10 11:35:58.550267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.626 [2024-06-10 11:35:58.550282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.626 [2024-06-10 11:35:58.550333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.626 [2024-06-10 11:35:58.550350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.626 [2024-06-10 11:35:58.550402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001a0a cdw11:00000000 00:25:14.626 [2024-06-10 11:35:58.550416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.885 #38 NEW cov: 12037 ft: 14480 corp: 23/140b lim: 10 exec/s: 38 rss: 73Mb L: 8/10 
MS: 1 EraseBytes- 00:25:14.885 [2024-06-10 11:35:58.600386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.600412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.600478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.600493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.600543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.600557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.600608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.600622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.885 #39 NEW cov: 12037 ft: 14493 corp: 24/149b lim: 10 exec/s: 39 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:25:14.885 [2024-06-10 11:35:58.640456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.640481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.640549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.640563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.640616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.640629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.640684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.640697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.885 #40 NEW cov: 12037 ft: 14494 corp: 25/157b lim: 10 exec/s: 40 rss: 73Mb L: 8/10 MS: 1 EraseBytes- 00:25:14.885 [2024-06-10 11:35:58.680353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000abe cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.680378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.680445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bdf5 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.680459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:25:14.885 #41 NEW cov: 12037 ft: 14520 corp: 26/162b lim: 10 exec/s: 41 rss: 73Mb L: 5/10 MS: 1 ChangeBinInt- 00:25:14.885 [2024-06-10 11:35:58.730578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.730605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.730674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.730687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.730740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.730754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.885 #42 NEW cov: 12037 ft: 14525 corp: 27/169b lim: 10 exec/s: 42 rss: 73Mb L: 7/10 MS: 1 ShuffleBytes- 00:25:14.885 [2024-06-10 11:35:58.770798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.770822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.770891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.770905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.770959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.770972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.771026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000000f8 cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.771040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.885 #43 NEW cov: 12037 ft: 14532 corp: 28/177b lim: 10 exec/s: 43 rss: 73Mb L: 8/10 MS: 1 ChangeBinInt- 00:25:14.885 [2024-06-10 11:35:58.820845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.820870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.820923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:14.885 [2024-06-10 11:35:58.820937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.885 [2024-06-10 11:35:58.820987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:14.885 [2024-06-10 
11:35:58.821000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.145 #44 NEW cov: 12037 ft: 14542 corp: 29/184b lim: 10 exec/s: 44 rss: 73Mb L: 7/10 MS: 1 CrossOver- 00:25:15.145 [2024-06-10 11:35:58.861089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.861115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.861167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.861181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.861230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.861254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.861321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003aff cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.861334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.145 #45 NEW cov: 12037 ft: 14552 corp: 30/192b lim: 10 exec/s: 45 rss: 74Mb L: 8/10 MS: 1 InsertByte- 00:25:15.145 [2024-06-10 11:35:58.911242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.911271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.911325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.911339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.911392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.911405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.911459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.911472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.145 #46 NEW cov: 12037 ft: 14561 corp: 31/201b lim: 10 exec/s: 46 rss: 74Mb L: 9/10 MS: 1 ChangeBit- 00:25:15.145 [2024-06-10 11:35:58.951367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.951392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.951444] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.951458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.951507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.951520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.951571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.951584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:58.951635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:25:15.145 [2024-06-10 11:35:58.951648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.145 #47 NEW cov: 12037 ft: 14566 corp: 32/211b lim: 10 exec/s: 47 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:25:15.145 [2024-06-10 11:35:59.001549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000001e cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.001574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:59.001644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.001658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:59.001715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.001729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:59.001782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.001795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:59.001847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.001860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.145 #48 NEW cov: 12037 ft: 14570 corp: 33/221b lim: 10 exec/s: 48 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:25:15.145 [2024-06-10 11:35:59.041320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f6b8 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.041344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:59.041398] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000420a cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.041411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.145 #49 NEW cov: 12037 ft: 14574 corp: 34/226b lim: 10 exec/s: 49 rss: 74Mb L: 5/10 MS: 1 ChangeBinInt- 00:25:15.145 [2024-06-10 11:35:59.081551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.081575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:59.081642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.081657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.145 [2024-06-10 11:35:59.081708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.145 [2024-06-10 11:35:59.081721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.407 #50 NEW cov: 12037 ft: 14647 corp: 35/233b lim: 10 exec/s: 50 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:25:15.407 [2024-06-10 11:35:59.121954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.121979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.122033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.122047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.122100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.122112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.122164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.122177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.122229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.122248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.407 #51 NEW cov: 12037 ft: 14663 corp: 36/243b lim: 10 exec/s: 51 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:25:15.407 [2024-06-10 11:35:59.171942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.171967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.172035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.172050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.172102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.172115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.172165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff3a cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.172179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.407 #53 NEW cov: 12037 ft: 14673 corp: 37/252b lim: 10 exec/s: 53 rss: 74Mb L: 9/10 MS: 2 CopyPart-CrossOver- 00:25:15.407 [2024-06-10 11:35:59.212034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.212058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.212124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.212138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.212189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000aff cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.212203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.212257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a3a cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.212270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.407 #54 NEW cov: 12037 ft: 14678 corp: 38/261b lim: 10 exec/s: 54 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:25:15.407 [2024-06-10 11:35:59.262292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.262316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.262388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.262402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.262456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.262469] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.262523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.262540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.407 [2024-06-10 11:35:59.262591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00001a0a cdw11:00000000 00:25:15.407 [2024-06-10 11:35:59.262605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.407 #55 NEW cov: 12037 ft: 14686 corp: 39/271b lim: 10 exec/s: 55 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:25:15.407 [2024-06-10 11:35:59.302165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:25:15.408 [2024-06-10 11:35:59.302189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.408 [2024-06-10 11:35:59.302263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:25:15.408 [2024-06-10 11:35:59.302278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.408 [2024-06-10 11:35:59.302342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:25:15.408 [2024-06-10 11:35:59.302356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.408 #56 NEW cov: 12037 ft: 14689 corp: 40/278b lim: 10 exec/s: 28 rss: 75Mb L: 7/10 MS: 1 ShuffleBytes- 00:25:15.408 #56 DONE cov: 12037 ft: 14689 corp: 40/278b lim: 10 exec/s: 28 rss: 75Mb 00:25:15.408 ###### Recommended dictionary. ###### 00:25:15.408 "\000\000" # Uses: 0 00:25:15.408 ###### End of recommended dictionary. 
###### 00:25:15.408 Done 56 runs in 2 second(s) 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:15.702 11:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:25:15.702 [2024-06-10 11:35:59.514550] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:15.702 [2024-06-10 11:35:59.514633] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330835 ] 00:25:15.702 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.971 [2024-06-10 11:35:59.800886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.971 [2024-06-10 11:35:59.901532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.229 [2024-06-10 11:35:59.962282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.229 [2024-06-10 11:35:59.978546] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:25:16.229 INFO: Running with entropic power schedule (0xFF, 100). 
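For reference, the nvmf/run.sh trace above records how each fuzzer instance is configured and launched. The following is a minimal shell sketch of what run 7 amounts to, reconstructed only from the traced commands; the output redirections for the suppression file and the generated config are assumptions (the trace shows the echo and sed commands but not their targets), and the workspace paths are specific to this Jenkins node:

    # Sketch reconstructed from the nvmf/run.sh trace above; not a verified script.
    export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
    # Leak suppressions echoed at run.sh@41/@42 (assumed to be written into the suppression file).
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
    # Corpus directory for this fuzzer index (run.sh@35).
    mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
    # Retarget the template config from port 4420 to this run's port 4407 (run.sh@38);
    # writing it to /tmp/fuzz_json_7.conf matches the nvmf_cfg variable set at run.sh@27.
    sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' \
        /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf \
        > /tmp/fuzz_json_7.conf
    # Launch the fuzz target against a TCP listener on 127.0.0.1:4407 for 1 second (run.sh@45).
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz \
        -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' \
        -c /tmp/fuzz_json_7.conf -t 1 \
        -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7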
00:25:16.229 INFO: Seed: 73993575 00:25:16.229 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:16.229 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:16.229 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:25:16.229 INFO: A corpus is not provided, starting from an empty corpus 00:25:16.229 #2 INITED exec/s: 0 rss: 65Mb 00:25:16.229 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:16.229 This may also happen if the target rejected all inputs we tried so far 00:25:16.229 [2024-06-10 11:36:00.034036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:16.229 [2024-06-10 11:36:00.034069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.229 [2024-06-10 11:36:00.034133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.229 [2024-06-10 11:36:00.034149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.229 [2024-06-10 11:36:00.034200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.229 [2024-06-10 11:36:00.034215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.520 NEW_FUNC[1/685]: 0x48e380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:25:16.520 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:16.520 #3 NEW cov: 11793 ft: 11794 corp: 2/8b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:25:16.520 [2024-06-10 11:36:00.384924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:16.520 [2024-06-10 11:36:00.384971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.520 [2024-06-10 11:36:00.385022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.520 [2024-06-10 11:36:00.385036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.520 [2024-06-10 11:36:00.385084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.520 [2024-06-10 11:36:00.385098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.520 #4 NEW cov: 11923 ft: 12359 corp: 3/15b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ChangeBit- 00:25:16.520 [2024-06-10 11:36:00.434938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:16.520 [2024-06-10 11:36:00.434969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.520 [2024-06-10 
11:36:00.435019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.520 [2024-06-10 11:36:00.435036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.520 [2024-06-10 11:36:00.435084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.520 [2024-06-10 11:36:00.435098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.520 #5 NEW cov: 11929 ft: 12582 corp: 4/22b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ShuffleBytes- 00:25:16.780 [2024-06-10 11:36:00.485045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.485071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.485135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.485149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.485198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.485211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.780 #6 NEW cov: 12014 ft: 12780 corp: 5/28b lim: 10 exec/s: 0 rss: 72Mb L: 6/7 MS: 1 EraseBytes- 00:25:16.780 [2024-06-10 11:36:00.535319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.535345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.535419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005959 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.535433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.535478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005900 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.535491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.535536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.535549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:16.780 #7 NEW cov: 12014 ft: 13186 corp: 6/37b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:25:16.780 [2024-06-10 11:36:00.585373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.585399] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.585462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.585476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.585522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.585535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.780 #8 NEW cov: 12014 ft: 13334 corp: 7/44b lim: 10 exec/s: 0 rss: 72Mb L: 7/9 MS: 1 ChangeBit- 00:25:16.780 [2024-06-10 11:36:00.625550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.625579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.625643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005959 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.625657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.625705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007900 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.625718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.625767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.625781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:16.780 #9 NEW cov: 12014 ft: 13378 corp: 8/53b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBit- 00:25:16.780 [2024-06-10 11:36:00.675757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.675783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.675832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.675846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.675893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.675906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.780 [2024-06-10 11:36:00.675957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.780 [2024-06-10 11:36:00.675970] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:16.781 #10 NEW cov: 12014 ft: 13425 corp: 9/61b lim: 10 exec/s: 0 rss: 73Mb L: 8/9 MS: 1 CrossOver- 00:25:16.781 [2024-06-10 11:36:00.715883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:16.781 [2024-06-10 11:36:00.715910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.781 [2024-06-10 11:36:00.715959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.781 [2024-06-10 11:36:00.715972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.781 [2024-06-10 11:36:00.716021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.781 [2024-06-10 11:36:00.716034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.781 [2024-06-10 11:36:00.716084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:25:16.781 [2024-06-10 11:36:00.716098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.039 #11 NEW cov: 12014 ft: 13501 corp: 10/69b lim: 10 exec/s: 0 rss: 73Mb L: 8/9 MS: 1 CopyPart- 00:25:17.039 [2024-06-10 11:36:00.755885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:17.039 [2024-06-10 11:36:00.755914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.039 [2024-06-10 11:36:00.755962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002000 cdw11:00000000 00:25:17.039 [2024-06-10 11:36:00.755976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.756021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.756035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.040 #12 NEW cov: 12014 ft: 13547 corp: 11/76b lim: 10 exec/s: 0 rss: 73Mb L: 7/9 MS: 1 ChangeByte- 00:25:17.040 [2024-06-10 11:36:00.806049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.806077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.806142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.806156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.806204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.806217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.040 #13 NEW cov: 12014 ft: 13569 corp: 12/83b lim: 10 exec/s: 0 rss: 73Mb L: 7/9 MS: 1 ShuffleBytes- 00:25:17.040 [2024-06-10 11:36:00.846165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.846193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.846243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.846262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.846310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000400 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.846324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.040 #14 NEW cov: 12014 ft: 13586 corp: 13/90b lim: 10 exec/s: 0 rss: 73Mb L: 7/9 MS: 1 CMP- DE: "\004\000"- 00:25:17.040 [2024-06-10 11:36:00.886269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.886296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.886344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.886357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.886405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000700 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.886419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.040 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:17.040 #15 NEW cov: 12037 ft: 13605 corp: 14/97b lim: 10 exec/s: 0 rss: 73Mb L: 7/9 MS: 1 ChangeBinInt- 00:25:17.040 [2024-06-10 11:36:00.926400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.926431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.926480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001000 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.926495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.926544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.926557] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.040 #16 NEW cov: 12037 ft: 13651 corp: 15/103b lim: 10 exec/s: 0 rss: 73Mb L: 6/9 MS: 1 ChangeBit- 00:25:17.040 [2024-06-10 11:36:00.966385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.966411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.040 [2024-06-10 11:36:00.966460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.040 [2024-06-10 11:36:00.966474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.299 #17 NEW cov: 12037 ft: 13874 corp: 16/108b lim: 10 exec/s: 0 rss: 73Mb L: 5/9 MS: 1 EraseBytes- 00:25:17.299 [2024-06-10 11:36:01.016484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.016510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.016558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.016572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.299 #18 NEW cov: 12037 ft: 13891 corp: 17/113b lim: 10 exec/s: 18 rss: 73Mb L: 5/9 MS: 1 EraseBytes- 00:25:17.299 [2024-06-10 11:36:01.066875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.066900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.066947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.066961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.067010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.067023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.067071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000008a cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.067084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.299 #19 NEW cov: 12037 ft: 13909 corp: 18/121b lim: 10 exec/s: 19 rss: 73Mb L: 8/9 MS: 1 ChangeBit- 00:25:17.299 [2024-06-10 11:36:01.117010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002c00 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.117037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:17.299 [2024-06-10 11:36:01.117088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005959 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.117105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.117153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007900 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.117167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.117216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.117230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.299 #20 NEW cov: 12037 ft: 13919 corp: 19/130b lim: 10 exec/s: 20 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:25:17.299 [2024-06-10 11:36:01.167134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.167159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.167208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002000 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.167222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.167273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000002e cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.167287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.299 [2024-06-10 11:36:01.167333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000032 cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.167345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.299 #21 NEW cov: 12037 ft: 13945 corp: 20/138b lim: 10 exec/s: 21 rss: 73Mb L: 8/9 MS: 1 InsertByte- 00:25:17.299 [2024-06-10 11:36:01.216946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004a2a cdw11:00000000 00:25:17.299 [2024-06-10 11:36:01.216971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.299 #25 NEW cov: 12037 ft: 14230 corp: 21/140b lim: 10 exec/s: 25 rss: 73Mb L: 2/9 MS: 4 ShuffleBytes-ChangeBinInt-ChangeByte-InsertByte- 00:25:17.558 [2024-06-10 11:36:01.257297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.257323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.257370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002000 cdw11:00000000 00:25:17.558 
[2024-06-10 11:36:01.257384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.257430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.257443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.558 #26 NEW cov: 12037 ft: 14272 corp: 22/147b lim: 10 exec/s: 26 rss: 73Mb L: 7/9 MS: 1 ChangeASCIIInt- 00:25:17.558 [2024-06-10 11:36:01.297514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.297539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.297589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005959 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.297603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.297651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000400 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.297664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.297711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.297724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.558 #27 NEW cov: 12037 ft: 14277 corp: 23/156b lim: 10 exec/s: 27 rss: 73Mb L: 9/9 MS: 1 PersAutoDict- DE: "\004\000"- 00:25:17.558 [2024-06-10 11:36:01.337531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.337557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.337603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000400 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.337618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.337666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.337679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.558 #28 NEW cov: 12037 ft: 14288 corp: 24/163b lim: 10 exec/s: 28 rss: 73Mb L: 7/9 MS: 1 ChangeBit- 00:25:17.558 [2024-06-10 11:36:01.377755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.377781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.377831] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.377844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.377891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002000 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.377904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.377952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000800 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.377965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.558 #29 NEW cov: 12037 ft: 14307 corp: 25/171b lim: 10 exec/s: 29 rss: 73Mb L: 8/9 MS: 1 ChangeBinInt- 00:25:17.558 [2024-06-10 11:36:01.417745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.417770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.417821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.417835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.417883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.417899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.558 #30 NEW cov: 12037 ft: 14318 corp: 26/178b lim: 10 exec/s: 30 rss: 73Mb L: 7/9 MS: 1 ShuffleBytes- 00:25:17.558 [2024-06-10 11:36:01.457934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.457959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.458008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.458021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.458071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000459 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.458084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.558 [2024-06-10 11:36:01.458130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005900 cdw11:00000000 00:25:17.558 [2024-06-10 11:36:01.458143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.558 #31 NEW cov: 12037 ft: 14336 corp: 27/187b 
lim: 10 exec/s: 31 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:25:17.818 [2024-06-10 11:36:01.508156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.508183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.508231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.508249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.508299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003259 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.508313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.508361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005900 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.508375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.818 #32 NEW cov: 12037 ft: 14353 corp: 28/196b lim: 10 exec/s: 32 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:25:17.818 [2024-06-10 11:36:01.558142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000b00 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.558167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.558217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.558230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.558280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.558293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.818 #33 NEW cov: 12037 ft: 14363 corp: 29/202b lim: 10 exec/s: 33 rss: 74Mb L: 6/9 MS: 1 ChangeBit- 00:25:17.818 [2024-06-10 11:36:01.598503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.598531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.598581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.598594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.598643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000400 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.598657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.598704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.598717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.598765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000008a cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.598778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:17.818 #34 NEW cov: 12037 ft: 14443 corp: 30/212b lim: 10 exec/s: 34 rss: 74Mb L: 10/10 MS: 1 PersAutoDict- DE: "\004\000"- 00:25:17.818 [2024-06-10 11:36:01.648401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e900 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.648427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.648474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.648488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.648536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.648549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:17.818 #35 NEW cov: 12037 ft: 14447 corp: 31/218b lim: 10 exec/s: 35 rss: 74Mb L: 6/10 MS: 1 ChangeByte- 00:25:17.818 [2024-06-10 11:36:01.698310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000400 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.698335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.818 #36 NEW cov: 12037 ft: 14469 corp: 32/220b lim: 10 exec/s: 36 rss: 74Mb L: 2/10 MS: 1 PersAutoDict- DE: "\004\000"- 00:25:17.818 [2024-06-10 11:36:01.748670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.748696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.748746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.748760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:17.818 [2024-06-10 11:36:01.748808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:25:17.818 [2024-06-10 11:36:01.748822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:18.076 #37 NEW cov: 12037 ft: 14492 corp: 33/226b lim: 10 exec/s: 37 rss: 74Mb L: 6/10 MS: 1 InsertByte- 00:25:18.076 [2024-06-10 
11:36:01.799038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:18.076 [2024-06-10 11:36:01.799067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:18.076 [2024-06-10 11:36:01.799117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.076 [2024-06-10 11:36:01.799131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:18.076 [2024-06-10 11:36:01.799179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000459 cdw11:00000000 00:25:18.076 [2024-06-10 11:36:01.799209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:18.076 [2024-06-10 11:36:01.799260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005900 cdw11:00000000 00:25:18.076 [2024-06-10 11:36:01.799275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:18.076 [2024-06-10 11:36:01.799325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.076 [2024-06-10 11:36:01.799339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:18.076 #38 NEW cov: 12037 ft: 14511 corp: 34/236b lim: 10 exec/s: 38 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:25:18.076 [2024-06-10 11:36:01.839032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:18.076 [2024-06-10 11:36:01.839057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:18.076 [2024-06-10 11:36:01.839105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005959 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.839119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:01.839168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000400 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.839182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:01.839229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.839242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:18.077 #39 NEW cov: 12037 ft: 14515 corp: 35/245b lim: 10 exec/s: 39 rss: 74Mb L: 9/10 MS: 1 PersAutoDict- DE: "\004\000"- 00:25:18.077 [2024-06-10 11:36:01.879109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.879134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:18.077 [2024-06-10 11:36:01.879186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000080 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.879200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:01.879253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.879267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:01.879315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000008a cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.879328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:18.077 #40 NEW cov: 12037 ft: 14527 corp: 36/253b lim: 10 exec/s: 40 rss: 74Mb L: 8/10 MS: 1 CopyPart- 00:25:18.077 [2024-06-10 11:36:01.919275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.919301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:01.919350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002000 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.919364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:01.919414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002e00 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.919428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:01.919477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000032 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.919490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:18.077 #41 NEW cov: 12037 ft: 14553 corp: 37/261b lim: 10 exec/s: 41 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:25:18.077 [2024-06-10 11:36:01.969175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.969201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:01.969256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000030 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:01.969270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:18.077 #42 NEW cov: 12037 ft: 14591 corp: 38/265b lim: 10 exec/s: 42 rss: 74Mb L: 4/10 MS: 1 EraseBytes- 00:25:18.077 [2024-06-10 11:36:02.019577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:02.019602] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:02.019650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:02.019664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:02.019711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007d00 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:02.019724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:18.077 [2024-06-10 11:36:02.019774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.077 [2024-06-10 11:36:02.019788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:18.335 #43 NEW cov: 12037 ft: 14642 corp: 39/273b lim: 10 exec/s: 21 rss: 74Mb L: 8/10 MS: 1 InsertByte- 00:25:18.335 #43 DONE cov: 12037 ft: 14642 corp: 39/273b lim: 10 exec/s: 21 rss: 74Mb 00:25:18.335 ###### Recommended dictionary. ###### 00:25:18.336 "\004\000" # Uses: 4 00:25:18.336 ###### End of recommended dictionary. ###### 00:25:18.336 Done 43 runs in 2 second(s) 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:18.336 11:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:25:18.336 [2024-06-10 11:36:02.217856] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:18.336 [2024-06-10 11:36:02.217929] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331348 ] 00:25:18.336 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.595 [2024-06-10 11:36:02.526292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.853 [2024-06-10 11:36:02.628328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.853 [2024-06-10 11:36:02.688602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.853 [2024-06-10 11:36:02.704855] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:25:18.853 INFO: Running with entropic power schedule (0xFF, 100). 00:25:18.853 INFO: Seed: 2800974910 00:25:18.854 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:18.854 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:18.854 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:25:18.854 INFO: A corpus is not provided, starting from an empty corpus 00:25:18.854 [2024-06-10 11:36:02.782031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.854 [2024-06-10 11:36:02.782088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:18.854 #2 INITED cov: 11821 ft: 11821 corp: 1/1b exec/s: 0 rss: 70Mb 00:25:19.113 [2024-06-10 11:36:02.833132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.833165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.833307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.833330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.833472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.833492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.833619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.833639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.833764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.833782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.113 #3 NEW cov: 11951 ft: 13400 corp: 2/6b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:25:19.113 [2024-06-10 11:36:02.902420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.902452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.902578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.902598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.902713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.902731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.113 #4 NEW cov: 11957 ft: 13829 corp: 3/9b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CMP- DE: "\000\000"- 00:25:19.113 [2024-06-10 11:36:02.953157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.953189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.953323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.953342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.953465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.953484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.953612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.953631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:02.953750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:02.953775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.113 #5 NEW cov: 12042 ft: 14119 corp: 4/14b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeBinInt- 00:25:19.113 [2024-06-10 11:36:03.023371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:03.023401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:03.023518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:03.023535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:03.023654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:03.023672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:03.023807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:03.023827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.113 [2024-06-10 11:36:03.023951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.113 [2024-06-10 11:36:03.023971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.113 #6 NEW cov: 12042 ft: 14209 corp: 5/19b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:25:19.373 [2024-06-10 11:36:03.093602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.093634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.093772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.093795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.093915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.093934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.094074] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.094094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.094216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.094236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.373 #7 NEW cov: 12042 ft: 14299 corp: 6/24b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeByte- 00:25:19.373 [2024-06-10 11:36:03.143911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.143944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.144066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.144086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.144222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.144242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.144386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.144407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.144528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.144547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.373 #8 NEW cov: 12042 ft: 14376 corp: 7/29b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeBit- 00:25:19.373 [2024-06-10 11:36:03.214082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.214113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.214242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.214267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.214400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.214419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.214541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.214560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.214679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.214699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.373 #9 NEW cov: 12042 ft: 14439 corp: 8/34b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:25:19.373 [2024-06-10 11:36:03.264168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.264198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.373 [2024-06-10 11:36:03.264328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.373 [2024-06-10 11:36:03.264353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.374 [2024-06-10 11:36:03.264480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.374 [2024-06-10 11:36:03.264497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.374 [2024-06-10 11:36:03.264624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.374 [2024-06-10 11:36:03.264641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.374 [2024-06-10 11:36:03.264771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.374 [2024-06-10 11:36:03.264791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.374 #10 NEW cov: 12042 ft: 14517 corp: 9/39b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:25:19.633 [2024-06-10 11:36:03.334332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.334366] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.334497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.334518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.334644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.334666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.334798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.334818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.334940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.334957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.633 #11 NEW cov: 12042 ft: 14629 corp: 10/44b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000"- 00:25:19.633 [2024-06-10 11:36:03.384251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.384285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.384412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.384432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.384583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.384605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.384728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.384748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.633 #12 NEW cov: 12042 ft: 14669 corp: 11/48b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 EraseBytes- 00:25:19.633 [2024-06-10 11:36:03.454753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:19.633 [2024-06-10 11:36:03.454786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.454918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.454940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.455061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.455080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.455206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.455226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.455373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.455392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.633 #13 NEW cov: 12042 ft: 14688 corp: 12/53b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:25:19.633 [2024-06-10 11:36:03.504920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.504952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.505083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.505105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.505222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.505242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.505296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.505307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.505324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.505336] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.633 #14 NEW cov: 12051 ft: 14746 corp: 13/58b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:25:19.633 [2024-06-10 11:36:03.555088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.555120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.555250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.555269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.555395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.555415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.633 [2024-06-10 11:36:03.555535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-06-10 11:36:03.555554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.634 [2024-06-10 11:36:03.555685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.634 [2024-06-10 11:36:03.555705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.893 #15 NEW cov: 12051 ft: 14814 corp: 14/63b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:25:19.893 [2024-06-10 11:36:03.625294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.893 [2024-06-10 11:36:03.625325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.893 [2024-06-10 11:36:03.625455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.893 [2024-06-10 11:36:03.625473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:19.893 [2024-06-10 11:36:03.625595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.893 [2024-06-10 11:36:03.625615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.893 [2024-06-10 11:36:03.625749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:19.893 [2024-06-10 11:36:03.625768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:19.893 [2024-06-10 11:36:03.625896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.893 [2024-06-10 11:36:03.625914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.152 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:20.152 #16 NEW cov: 12074 ft: 14870 corp: 15/68b lim: 5 exec/s: 16 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:25:20.152 [2024-06-10 11:36:03.966894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:03.966939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:03.967031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:03.967048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:03.967136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:03.967156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:03.967250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:03.967267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:03.967366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:03.967394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.152 #17 NEW cov: 12074 ft: 15032 corp: 16/73b lim: 5 exec/s: 17 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:25:20.152 [2024-06-10 11:36:04.037491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.037522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:04.037625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.037643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:25:20.152 [2024-06-10 11:36:04.037722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.037737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:04.037819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.037835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:04.037921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.037937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.152 #18 NEW cov: 12074 ft: 15045 corp: 17/78b lim: 5 exec/s: 18 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:25:20.152 [2024-06-10 11:36:04.087821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.087850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:04.087939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.087959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:04.088040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.088055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:04.088138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.088155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.152 [2024-06-10 11:36:04.088263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.152 [2024-06-10 11:36:04.088280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.411 #19 NEW cov: 12074 ft: 15132 corp: 18/83b lim: 5 exec/s: 19 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:25:20.411 [2024-06-10 11:36:04.138253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.411 [2024-06-10 11:36:04.138283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.411 [2024-06-10 11:36:04.138374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.411 [2024-06-10 11:36:04.138392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.411 [2024-06-10 11:36:04.138476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.411 [2024-06-10 11:36:04.138492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.411 [2024-06-10 11:36:04.138585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.411 [2024-06-10 11:36:04.138601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.411 [2024-06-10 11:36:04.138696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.411 [2024-06-10 11:36:04.138712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.411 #20 NEW cov: 12074 ft: 15141 corp: 19/88b lim: 5 exec/s: 20 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:25:20.411 [2024-06-10 11:36:04.188731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.411 [2024-06-10 11:36:04.188758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.411 [2024-06-10 11:36:04.188846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.411 [2024-06-10 11:36:04.188864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.411 [2024-06-10 11:36:04.188953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.411 [2024-06-10 11:36:04.188972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.411 [2024-06-10 11:36:04.189055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-06-10 11:36:04.189071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.412 [2024-06-10 11:36:04.189154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-06-10 11:36:04.189170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:25:20.412 #21 NEW cov: 12074 ft: 15156 corp: 20/93b lim: 5 exec/s: 21 rss: 73Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000"- 00:25:20.412 [2024-06-10 11:36:04.257447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-06-10 11:36:04.257475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.412 #22 NEW cov: 12074 ft: 15182 corp: 21/94b lim: 5 exec/s: 22 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:25:20.412 [2024-06-10 11:36:04.309348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-06-10 11:36:04.309377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.412 [2024-06-10 11:36:04.309480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-06-10 11:36:04.309498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.412 [2024-06-10 11:36:04.309591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-06-10 11:36:04.309608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.412 [2024-06-10 11:36:04.309696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-06-10 11:36:04.309711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.412 [2024-06-10 11:36:04.309801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-06-10 11:36:04.309816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.412 #23 NEW cov: 12074 ft: 15191 corp: 22/99b lim: 5 exec/s: 23 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:25:20.671 [2024-06-10 11:36:04.377994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.378024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.671 #24 NEW cov: 12074 ft: 15216 corp: 23/100b lim: 5 exec/s: 24 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:25:20.671 [2024-06-10 11:36:04.429863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.429891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.429993] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.430010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.430097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.430113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.430202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.430218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.430310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.430327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.671 #25 NEW cov: 12074 ft: 15230 corp: 24/105b lim: 5 exec/s: 25 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:25:20.671 [2024-06-10 11:36:04.500192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.500221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.500318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.500335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.500423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.500438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.500527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.500544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.500635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.500652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.671 #26 NEW cov: 12074 ft: 15245 corp: 25/110b lim: 5 exec/s: 26 rss: 74Mb L: 5/5 MS: 1 
ChangeBinInt- 00:25:20.671 [2024-06-10 11:36:04.570522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.570551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.570637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.570652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.570744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.570762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.570852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.570868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.671 [2024-06-10 11:36:04.570951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-06-10 11:36:04.570967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.671 #27 NEW cov: 12074 ft: 15303 corp: 26/115b lim: 5 exec/s: 27 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:25:20.931 [2024-06-10 11:36:04.640861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.640891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.640985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.641003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.641092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.641109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.641199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.641216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 
11:36:04.641312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.641328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.931 #28 NEW cov: 12074 ft: 15326 corp: 27/120b lim: 5 exec/s: 28 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:25:20.931 [2024-06-10 11:36:04.691011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.691042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.691132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.691150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.691250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.691266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.691352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.691368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.691454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.691471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:20.931 #29 NEW cov: 12074 ft: 15365 corp: 28/125b lim: 5 exec/s: 29 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:25:20.931 [2024-06-10 11:36:04.740528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.740559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.740642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.740659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:20.931 [2024-06-10 11:36:04.740747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-06-10 11:36:04.740762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:20.931 #30 NEW cov: 12074 ft: 15389 corp: 29/128b lim: 5 exec/s: 15 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:25:20.931 #30 DONE cov: 12074 ft: 15389 corp: 29/128b lim: 5 exec/s: 15 rss: 74Mb 00:25:20.931 ###### Recommended dictionary. ###### 00:25:20.931 "\000\000" # Uses: 2 00:25:20.931 ###### End of recommended dictionary. ###### 00:25:20.931 Done 30 runs in 2 second(s) 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:25:21.189 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:25:21.190 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:25:21.190 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:21.190 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:21.190 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:21.190 11:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:25:21.190 [2024-06-10 11:36:04.950961] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:25:21.190 [2024-06-10 11:36:04.951042] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331940 ] 00:25:21.190 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.448 [2024-06-10 11:36:05.236130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.448 [2024-06-10 11:36:05.332014] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.448 [2024-06-10 11:36:05.392334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.707 [2024-06-10 11:36:05.408595] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:25:21.707 INFO: Running with entropic power schedule (0xFF, 100). 00:25:21.707 INFO: Seed: 1209017476 00:25:21.707 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:21.707 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:21.707 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:25:21.707 INFO: A corpus is not provided, starting from an empty corpus 00:25:21.707 [2024-06-10 11:36:05.453348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.707 [2024-06-10 11:36:05.453386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.707 #2 INITED cov: 11818 ft: 11819 corp: 1/1b exec/s: 0 rss: 70Mb 00:25:21.707 [2024-06-10 11:36:05.503333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.707 [2024-06-10 11:36:05.503365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.707 [2024-06-10 11:36:05.503413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.707 [2024-06-10 11:36:05.503430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:21.707 #3 NEW cov: 11951 ft: 13075 corp: 2/3b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:25:21.707 [2024-06-10 11:36:05.583546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.707 [2024-06-10 11:36:05.583577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.707 [2024-06-10 11:36:05.583625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.707 [2024-06-10 11:36:05.583641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:21.707 #4 NEW cov: 11957 ft: 13243 corp: 3/5b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:25:21.707 [2024-06-10 11:36:05.633675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.707 [2024-06-10 11:36:05.633706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.707 [2024-06-10 11:36:05.633755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.707 [2024-06-10 11:36:05.633776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:21.966 #5 NEW cov: 12042 ft: 13510 corp: 4/7b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:25:21.966 [2024-06-10 11:36:05.693831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.966 [2024-06-10 11:36:05.693862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.966 [2024-06-10 11:36:05.693911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.966 [2024-06-10 11:36:05.693926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:21.966 #6 NEW cov: 12042 ft: 13678 corp: 5/9b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:25:21.966 [2024-06-10 11:36:05.743894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.966 [2024-06-10 11:36:05.743924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.966 #7 NEW cov: 12042 ft: 13831 corp: 6/10b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 EraseBytes- 00:25:21.966 [2024-06-10 11:36:05.824228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.966 [2024-06-10 11:36:05.824267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.966 [2024-06-10 11:36:05.824316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.966 [2024-06-10 11:36:05.824332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:21.966 [2024-06-10 11:36:05.824361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.966 [2024-06-10 11:36:05.824377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:21.966 #8 NEW cov: 12042 ft: 14100 corp: 7/13b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 CrossOver- 00:25:21.966 [2024-06-10 11:36:05.904444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:21.966 [2024-06-10 11:36:05.904475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:21.966 [2024-06-10 11:36:05.904523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.966 [2024-06-10 11:36:05.904540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:21.966 [2024-06-10 11:36:05.904570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.966 [2024-06-10 11:36:05.904585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.225 #9 NEW cov: 12042 ft: 14163 corp: 8/16b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:25:22.226 [2024-06-10 11:36:05.984534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.226 [2024-06-10 11:36:05.984564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.226 #10 NEW cov: 12042 ft: 14196 corp: 9/17b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 CrossOver- 00:25:22.226 [2024-06-10 11:36:06.044730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.226 [2024-06-10 11:36:06.044760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.226 [2024-06-10 11:36:06.044808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.226 [2024-06-10 11:36:06.044824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.226 #11 NEW cov: 12042 ft: 14292 corp: 10/19b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ShuffleBytes- 00:25:22.226 [2024-06-10 11:36:06.125021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.226 [2024-06-10 11:36:06.125055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.226 [2024-06-10 11:36:06.125089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.226 [2024-06-10 11:36:06.125106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.226 #12 NEW cov: 12042 ft: 14352 corp: 11/21b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeBinInt- 00:25:22.485 [2024-06-10 11:36:06.185062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.485 [2024-06-10 11:36:06.185095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.485 #13 NEW cov: 12042 ft: 14405 corp: 12/22b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ChangeBinInt- 00:25:22.485 [2024-06-10 11:36:06.235222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.485 [2024-06-10 11:36:06.235262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.485 [2024-06-10 11:36:06.235311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.485 [2024-06-10 11:36:06.235327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.485 #14 NEW cov: 12042 ft: 14425 corp: 13/24b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeByte- 00:25:22.485 [2024-06-10 11:36:06.295418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.485 [2024-06-10 11:36:06.295449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.485 [2024-06-10 11:36:06.295482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.485 [2024-06-10 11:36:06.295499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.744 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:22.744 #15 NEW cov: 12065 ft: 14516 corp: 14/26b lim: 5 exec/s: 15 rss: 73Mb L: 2/3 MS: 1 EraseBytes- 00:25:22.744 [2024-06-10 11:36:06.656508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-06-10 11:36:06.656566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.744 [2024-06-10 11:36:06.656601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-06-10 11:36:06.656617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.003 #16 NEW cov: 12065 ft: 14552 corp: 15/28b lim: 5 exec/s: 16 rss: 73Mb L: 2/3 MS: 1 ChangeByte- 00:25:23.003 [2024-06-10 11:36:06.736550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.003 [2024-06-10 11:36:06.736584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.004 [2024-06-10 11:36:06.736618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.736634] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.004 #17 NEW cov: 12065 ft: 14589 corp: 16/30b lim: 5 exec/s: 17 rss: 73Mb L: 2/3 MS: 1 ChangeByte- 00:25:23.004 [2024-06-10 11:36:06.816883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.816915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.004 [2024-06-10 11:36:06.816949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.816966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.004 [2024-06-10 11:36:06.816995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.817010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:23.004 [2024-06-10 11:36:06.817039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.817054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:23.004 #18 NEW cov: 12065 ft: 14881 corp: 17/34b lim: 5 exec/s: 18 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:25:23.004 [2024-06-10 11:36:06.876890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.876921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.004 [2024-06-10 11:36:06.876955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.876971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.004 #19 NEW cov: 12065 ft: 14893 corp: 18/36b lim: 5 exec/s: 19 rss: 73Mb L: 2/4 MS: 1 ChangeBinInt- 00:25:23.004 [2024-06-10 11:36:06.927066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.927095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.004 [2024-06-10 11:36:06.927148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.927164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.004 [2024-06-10 11:36:06.927193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.004 [2024-06-10 11:36:06.927208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:23.263 #20 NEW cov: 12065 ft: 14922 corp: 19/39b lim: 5 exec/s: 20 rss: 73Mb L: 3/4 MS: 1 CrossOver- 00:25:23.263 [2024-06-10 11:36:07.007206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.263 [2024-06-10 11:36:07.007239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.263 [2024-06-10 11:36:07.007297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.263 [2024-06-10 11:36:07.007314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.263 #21 NEW cov: 12065 ft: 14938 corp: 20/41b lim: 5 exec/s: 21 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:25:23.263 [2024-06-10 11:36:07.087402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.263 [2024-06-10 11:36:07.087432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.263 #22 NEW cov: 12065 ft: 14982 corp: 21/42b lim: 5 exec/s: 22 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:25:23.263 [2024-06-10 11:36:07.137611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.263 [2024-06-10 11:36:07.137642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.263 [2024-06-10 11:36:07.137676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.263 [2024-06-10 11:36:07.137693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.263 #23 NEW cov: 12065 ft: 14997 corp: 22/44b lim: 5 exec/s: 23 rss: 73Mb L: 2/4 MS: 1 EraseBytes- 00:25:23.263 [2024-06-10 11:36:07.197794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.263 [2024-06-10 11:36:07.197827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.263 [2024-06-10 11:36:07.197861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.263 [2024-06-10 11:36:07.197878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.522 #24 NEW cov: 12065 ft: 15029 corp: 23/46b lim: 5 exec/s: 24 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:25:23.522 [2024-06-10 11:36:07.257847] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.522 [2024-06-10 11:36:07.257879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.522 #25 NEW cov: 12065 ft: 15056 corp: 24/47b lim: 5 exec/s: 25 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:25:23.522 [2024-06-10 11:36:07.338074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.522 [2024-06-10 11:36:07.338107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.522 #26 NEW cov: 12066 ft: 15133 corp: 25/48b lim: 5 exec/s: 26 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:25:23.522 [2024-06-10 11:36:07.418337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.522 [2024-06-10 11:36:07.418368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.522 [2024-06-10 11:36:07.418402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.522 [2024-06-10 11:36:07.418418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.522 #27 NEW cov: 12066 ft: 15150 corp: 26/50b lim: 5 exec/s: 13 rss: 73Mb L: 2/4 MS: 1 ChangeByte- 00:25:23.522 #27 DONE cov: 12066 ft: 15150 corp: 26/50b lim: 5 exec/s: 13 rss: 73Mb 00:25:23.522 Done 27 runs in 2 second(s) 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:25:23.781 11:36:07 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:23.781 11:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:25:23.781 [2024-06-10 11:36:07.647121] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:23.781 [2024-06-10 11:36:07.647202] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332496 ] 00:25:23.781 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.039 [2024-06-10 11:36:07.971134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.298 [2024-06-10 11:36:08.072359] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.298 [2024-06-10 11:36:08.132692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.299 [2024-06-10 11:36:08.148942] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:25:24.299 INFO: Running with entropic power schedule (0xFF, 100). 00:25:24.299 INFO: Seed: 3948009644 00:25:24.299 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:24.299 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:24.299 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:25:24.299 INFO: A corpus is not provided, starting from an empty corpus 00:25:24.299 #2 INITED exec/s: 0 rss: 65Mb 00:25:24.299 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:25:24.299 This may also happen if the target rejected all inputs we tried so far 00:25:24.299 [2024-06-10 11:36:08.207633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.299 [2024-06-10 11:36:08.207663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.866 NEW_FUNC[1/686]: 0x48fcf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:25:24.866 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:24.866 #5 NEW cov: 11844 ft: 11843 corp: 2/14b lim: 40 exec/s: 0 rss: 72Mb L: 13/13 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:25:24.866 [2024-06-10 11:36:08.550090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.550143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.866 #6 NEW cov: 11974 ft: 12486 corp: 3/27b lim: 40 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CopyPart- 00:25:24.866 [2024-06-10 11:36:08.611384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.611410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.866 [2024-06-10 11:36:08.611499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.611515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.866 [2024-06-10 11:36:08.611594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.611608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.866 [2024-06-10 11:36:08.611695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.611713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.866 #11 NEW cov: 11980 ft: 13447 corp: 4/61b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 5 ShuffleBytes-ChangeBit-CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:25:24.866 [2024-06-10 11:36:08.660769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.660797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.866 #12 NEW cov: 12065 ft: 13688 corp: 5/74b lim: 40 exec/s: 0 rss: 72Mb L: 13/34 
MS: 1 CrossOver- 00:25:24.866 [2024-06-10 11:36:08.711682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0513fefe cdw11:fefefefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.711710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.866 [2024-06-10 11:36:08.711797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefefefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.711813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.866 [2024-06-10 11:36:08.711910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fefe1313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.711926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.866 #13 NEW cov: 12065 ft: 13982 corp: 6/103b lim: 40 exec/s: 0 rss: 72Mb L: 29/34 MS: 1 InsertRepeatedBytes- 00:25:24.866 [2024-06-10 11:36:08.772547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.772573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.866 [2024-06-10 11:36:08.772669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:13000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.772687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.866 [2024-06-10 11:36:08.772769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.772793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.866 [2024-06-10 11:36:08.772885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-06-10 11:36:08.772903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.866 #14 NEW cov: 12065 ft: 14107 corp: 7/138b lim: 40 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:25:25.125 [2024-06-10 11:36:08.822064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0513ede5 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:08.822094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.126 #15 NEW cov: 12065 ft: 14173 corp: 8/151b lim: 40 exec/s: 0 rss: 72Mb L: 13/35 MS: 1 ChangeBinInt- 00:25:25.126 [2024-06-10 11:36:08.883116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:08.883141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.126 [2024-06-10 11:36:08.883233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:08.883258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.126 [2024-06-10 11:36:08.883335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30053030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:08.883353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.126 #16 NEW cov: 12065 ft: 14194 corp: 9/175b lim: 40 exec/s: 0 rss: 72Mb L: 24/35 MS: 1 CrossOver- 00:25:25.126 [2024-06-10 11:36:08.943633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:08.943658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.126 [2024-06-10 11:36:08.943746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:08.943763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.126 [2024-06-10 11:36:08.943855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30053030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:08.943870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.126 #17 NEW cov: 12065 ft: 14237 corp: 10/199b lim: 40 exec/s: 0 rss: 72Mb L: 24/35 MS: 1 ShuffleBytes- 00:25:25.126 [2024-06-10 11:36:09.004378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:09.004403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.126 [2024-06-10 11:36:09.004500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:09.004516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.126 [2024-06-10 11:36:09.004601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:09.004615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.126 [2024-06-10 11:36:09.004703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:09.004718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.126 #18 NEW cov: 12065 ft: 14275 corp: 11/233b lim: 40 exec/s: 0 rss: 72Mb L: 34/35 MS: 1 ChangeByte- 00:25:25.126 [2024-06-10 11:36:09.053755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-06-10 11:36:09.053781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.126 #19 NEW cov: 12065 ft: 14317 corp: 12/245b lim: 40 exec/s: 0 rss: 72Mb L: 12/35 MS: 1 EraseBytes- 00:25:25.385 [2024-06-10 11:36:09.104340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a2805 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.104365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.104453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:13131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.104471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.385 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:25.385 #24 NEW cov: 12088 ft: 14492 corp: 13/261b lim: 40 exec/s: 0 rss: 73Mb L: 16/35 MS: 5 CMP-ChangeByte-CopyPart-ChangeBit-CrossOver- DE: "\377\013"- 00:25:25.385 [2024-06-10 11:36:09.154746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0513fefe cdw11:fefefefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.154770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.154851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefefefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.154867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.154954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fefe1313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.154969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.385 #30 NEW cov: 12088 ft: 14500 corp: 14/290b lim: 40 exec/s: 0 rss: 73Mb L: 29/35 MS: 1 ChangeBinInt- 00:25:25.385 [2024-06-10 11:36:09.215476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.215501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.215593] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:3030db30 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.215609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.215700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.215717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.215806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.215822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.385 #31 NEW cov: 12088 ft: 14522 corp: 15/324b lim: 40 exec/s: 31 rss: 73Mb L: 34/35 MS: 1 ChangeByte- 00:25:25.385 [2024-06-10 11:36:09.275889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.275915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.276007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:3030db30 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.276025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.276112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.276127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.385 [2024-06-10 11:36:09.276207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.385 [2024-06-10 11:36:09.276227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.385 #32 NEW cov: 12088 ft: 14535 corp: 16/358b lim: 40 exec/s: 32 rss: 73Mb L: 34/35 MS: 1 ChangeBinInt- 00:25:25.644 [2024-06-10 11:36:09.336264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.336291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.336379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.336395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.336484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:13131313 cdw11:13303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.336500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.336599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.336616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.644 #33 NEW cov: 12088 ft: 14551 corp: 17/397b lim: 40 exec/s: 33 rss: 73Mb L: 39/39 MS: 1 CrossOver- 00:25:25.644 [2024-06-10 11:36:09.386314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.386339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.386426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:3030db30 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.386442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.386535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.386551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.386642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.386657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.644 #34 NEW cov: 12088 ft: 14592 corp: 18/431b lim: 40 exec/s: 34 rss: 73Mb L: 34/39 MS: 1 CopyPart- 00:25:25.644 [2024-06-10 11:36:09.435871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.435899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.644 #35 NEW cov: 12088 ft: 14606 corp: 19/444b lim: 40 exec/s: 35 rss: 73Mb L: 13/39 MS: 1 CrossOver- 00:25:25.644 [2024-06-10 11:36:09.486955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.486981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.487086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.487102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.487192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.487207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.644 #36 NEW cov: 12088 ft: 14636 corp: 20/468b lim: 40 exec/s: 36 rss: 73Mb L: 24/39 MS: 1 InsertRepeatedBytes- 00:25:25.644 [2024-06-10 11:36:09.547814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.547840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.547937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.547953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.548036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:13131313 cdw11:13303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.548051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.548136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:7e303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.644 [2024-06-10 11:36:09.548151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.644 [2024-06-10 11:36:09.548248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:30303030 cdw11:303030e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.645 [2024-06-10 11:36:09.548278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:25.645 #37 NEW cov: 12088 ft: 14687 corp: 21/508b lim: 40 exec/s: 37 rss: 73Mb L: 40/40 MS: 1 InsertByte- 00:25:25.903 [2024-06-10 11:36:09.607608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.903 [2024-06-10 11:36:09.607636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.903 [2024-06-10 11:36:09.607734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.903 [2024-06-10 11:36:09.607752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.903 [2024-06-10 11:36:09.607840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.903 [2024-06-10 11:36:09.607857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.903 [2024-06-10 11:36:09.607946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30302030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.903 [2024-06-10 11:36:09.607962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.903 #38 NEW cov: 12088 ft: 14770 corp: 22/542b lim: 40 exec/s: 38 rss: 73Mb L: 34/40 MS: 1 ChangeBit- 00:25:25.903 [2024-06-10 11:36:09.657257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a2805 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.657282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.904 [2024-06-10 11:36:09.657384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:13131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.657400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.904 #39 NEW cov: 12088 ft: 14803 corp: 23/558b lim: 40 exec/s: 39 rss: 73Mb L: 16/40 MS: 1 ShuffleBytes- 00:25:25.904 [2024-06-10 11:36:09.717464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.717489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.904 [2024-06-10 11:36:09.717581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.717596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.904 #40 NEW cov: 12088 ft: 14807 corp: 24/579b lim: 40 exec/s: 40 rss: 73Mb L: 21/40 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:25:25.904 [2024-06-10 11:36:09.778097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0513fefe cdw11:fefefefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.778122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.904 [2024-06-10 11:36:09.778216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fefefefe cdw11:fefefefe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.778232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.904 [2024-06-10 11:36:09.778335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fefe1313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.778352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.904 #41 NEW cov: 12088 ft: 14818 corp: 25/608b lim: 40 exec/s: 41 rss: 73Mb L: 29/40 MS: 1 CopyPart- 00:25:25.904 [2024-06-10 11:36:09.839088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05133030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.839114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.904 [2024-06-10 11:36:09.839206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30301313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.839223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.904 [2024-06-10 11:36:09.839325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:13131330 cdw11:30307e30 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.839340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.904 [2024-06-10 11:36:09.839436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30301313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.839454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.904 [2024-06-10 11:36:09.839551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:13131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.904 [2024-06-10 11:36:09.839568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:26.162 #42 NEW cov: 12088 ft: 14836 corp: 26/648b lim: 40 exec/s: 42 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:25:26.162 [2024-06-10 11:36:09.889001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30323030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.889026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:09.889110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:3030db30 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.889125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:09.889219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.889235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:09.889342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.889359] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:26.162 #43 NEW cov: 12088 ft: 14853 corp: 27/682b lim: 40 exec/s: 43 rss: 73Mb L: 34/40 MS: 1 ChangeBinInt- 00:25:26.162 [2024-06-10 11:36:09.939931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.939955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:09.940055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.940070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:09.940156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:13131313 cdw11:13303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.940171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:09.940267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:7b303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.940283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:09.940371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:30303030 cdw11:303030e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:09.940386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:26.162 #44 NEW cov: 12088 ft: 14866 corp: 28/722b lim: 40 exec/s: 44 rss: 74Mb L: 40/40 MS: 1 ChangeBinInt- 00:25:26.162 [2024-06-10 11:36:10.009266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:10.009296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:10.009396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:10.009413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.162 #45 NEW cov: 12088 ft: 14885 corp: 29/740b lim: 40 exec/s: 45 rss: 74Mb L: 18/40 MS: 1 EraseBytes- 00:25:26.162 [2024-06-10 11:36:10.059652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05131313 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:10.059685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:10.059778] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:13131313 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:10.059796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.162 [2024-06-10 11:36:10.059885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.162 [2024-06-10 11:36:10.059901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:26.163 #46 NEW cov: 12088 ft: 14910 corp: 30/766b lim: 40 exec/s: 46 rss: 74Mb L: 26/40 MS: 1 InsertRepeatedBytes- 00:25:26.163 [2024-06-10 11:36:10.110109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.163 [2024-06-10 11:36:10.110139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.163 [2024-06-10 11:36:10.110224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a0a2805 cdw11:13131313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.163 [2024-06-10 11:36:10.110241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.163 [2024-06-10 11:36:10.110337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:13131313 cdw11:13131330 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.163 [2024-06-10 11:36:10.110353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:26.163 [2024-06-10 11:36:10.110446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.163 [2024-06-10 11:36:10.110463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:26.420 #47 NEW cov: 12088 ft: 14944 corp: 31/799b lim: 40 exec/s: 47 rss: 74Mb L: 33/40 MS: 1 CrossOver- 00:25:26.420 [2024-06-10 11:36:10.180383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.421 [2024-06-10 11:36:10.180418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.421 [2024-06-10 11:36:10.180514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.421 [2024-06-10 11:36:10.180533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.421 [2024-06-10 11:36:10.180626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.421 [2024-06-10 11:36:10.180645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:25:26.421 [2024-06-10 11:36:10.180730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30ffffff cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.421 [2024-06-10 11:36:10.180746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:26.421 #48 NEW cov: 12088 ft: 14961 corp: 32/836b lim: 40 exec/s: 24 rss: 74Mb L: 37/40 MS: 1 InsertRepeatedBytes- 00:25:26.421 #48 DONE cov: 12088 ft: 14961 corp: 32/836b lim: 40 exec/s: 24 rss: 74Mb 00:25:26.421 ###### Recommended dictionary. ###### 00:25:26.421 "\377\013" # Uses: 0 00:25:26.421 "\000\000\000\000\000\000\000\000" # Uses: 0 00:25:26.421 ###### End of recommended dictionary. ###### 00:25:26.421 Done 48 runs in 2 second(s) 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:26.421 11:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:25:26.679 [2024-06-10 11:36:10.384164] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:25:26.679 [2024-06-10 11:36:10.384254] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332871 ] 00:25:26.679 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.937 [2024-06-10 11:36:10.714617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.937 [2024-06-10 11:36:10.807243] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.937 [2024-06-10 11:36:10.867449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.937 [2024-06-10 11:36:10.883693] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:25:27.194 INFO: Running with entropic power schedule (0xFF, 100). 00:25:27.194 INFO: Seed: 2389062556 00:25:27.194 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:27.194 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:27.194 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:25:27.194 INFO: A corpus is not provided, starting from an empty corpus 00:25:27.194 #2 INITED exec/s: 0 rss: 65Mb 00:25:27.194 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:27.194 This may also happen if the target rejected all inputs we tried so far 00:25:27.194 [2024-06-10 11:36:10.939025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2e0a4900 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.194 [2024-06-10 11:36:10.939053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.452 NEW_FUNC[1/687]: 0x491a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:25:27.452 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:27.452 #20 NEW cov: 11854 ft: 11856 corp: 2/12b lim: 40 exec/s: 0 rss: 72Mb L: 11/11 MS: 3 InsertByte-CrossOver-CMP- DE: "I\000\000\000\000\000\000\000"- 00:25:27.452 [2024-06-10 11:36:11.280106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.452 [2024-06-10 11:36:11.280147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.452 [2024-06-10 11:36:11.280207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.452 [2024-06-10 11:36:11.280221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.452 #23 NEW cov: 11986 ft: 13194 corp: 3/29b lim: 40 exec/s: 0 rss: 72Mb L: 17/17 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:25:27.452 [2024-06-10 11:36:11.320057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.452 [2024-06-10 11:36:11.320086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.452 [2024-06-10 11:36:11.320144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.452 [2024-06-10 11:36:11.320160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.452 #24 NEW cov: 11992 ft: 13450 corp: 4/46b lim: 40 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 ShuffleBytes- 00:25:27.452 [2024-06-10 11:36:11.370193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.452 [2024-06-10 11:36:11.370221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.452 [2024-06-10 11:36:11.370287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.452 [2024-06-10 11:36:11.370302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.452 #25 NEW cov: 12077 ft: 13715 corp: 5/63b lim: 40 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 ShuffleBytes- 00:25:27.711 [2024-06-10 11:36:11.410210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.410235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.711 #26 NEW cov: 12077 ft: 13829 corp: 6/73b lim: 40 exec/s: 0 rss: 73Mb L: 10/17 MS: 1 EraseBytes- 00:25:27.711 [2024-06-10 11:36:11.460490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.460517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.711 [2024-06-10 11:36:11.460577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d057d0 cdw11:d0d0d057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.460591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.711 #27 NEW cov: 12077 ft: 13877 corp: 7/90b lim: 40 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 CrossOver- 00:25:27.711 [2024-06-10 11:36:11.500579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.500607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.711 [2024-06-10 11:36:11.500666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffd0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.500682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.711 #28 NEW cov: 12077 ft: 14002 corp: 8/111b lim: 40 exec/s: 0 rss: 
73Mb L: 21/21 MS: 1 CMP- DE: "\377\377\377\377"- 00:25:27.711 [2024-06-10 11:36:11.550833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.550861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.711 [2024-06-10 11:36:11.550921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f1f1f1f1 cdw11:f1f1f1f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.550935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.711 [2024-06-10 11:36:11.551004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:f1f10000 cdw11:0000c000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.551018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:27.711 #31 NEW cov: 12077 ft: 14272 corp: 9/136b lim: 40 exec/s: 0 rss: 73Mb L: 25/25 MS: 3 EraseBytes-InsertByte-InsertRepeatedBytes- 00:25:27.711 [2024-06-10 11:36:11.601119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.601147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.711 [2024-06-10 11:36:11.601208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d057d0 cdw11:d09f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.601222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.711 [2024-06-10 11:36:11.601278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.601309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:27.711 [2024-06-10 11:36:11.601365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.601382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:27.711 #32 NEW cov: 12077 ft: 14600 corp: 10/174b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:25:27.711 [2024-06-10 11:36:11.650945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d028d0 cdw11:d0ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.650971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.711 [2024-06-10 11:36:11.651029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffd0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.711 [2024-06-10 11:36:11.651043] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.969 #33 NEW cov: 12077 ft: 14655 corp: 11/195b lim: 40 exec/s: 0 rss: 73Mb L: 21/38 MS: 1 ChangeByte- 00:25:27.969 [2024-06-10 11:36:11.700935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2bd0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.700960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.969 #34 NEW cov: 12077 ft: 14702 corp: 12/206b lim: 40 exec/s: 0 rss: 73Mb L: 11/38 MS: 1 InsertByte- 00:25:27.969 [2024-06-10 11:36:11.751705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.751729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.969 [2024-06-10 11:36:11.751786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d057d0 cdw11:9f9fd09f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.751801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.969 [2024-06-10 11:36:11.751854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.751867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:27.969 [2024-06-10 11:36:11.751921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.751934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:27.969 [2024-06-10 11:36:11.751991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:9f9f9f9f cdw11:d0d0570a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.752004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:27.969 #35 NEW cov: 12077 ft: 14782 corp: 13/246b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:25:27.969 [2024-06-10 11:36:11.801230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000049 cdw11:002e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.801258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.969 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:27.969 #36 NEW cov: 12100 ft: 14822 corp: 14/257b lim: 40 exec/s: 0 rss: 74Mb L: 11/40 MS: 1 ShuffleBytes- 00:25:27.969 [2024-06-10 11:36:11.841368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2e0abdff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.841396] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.969 #37 NEW cov: 12100 ft: 14898 corp: 15/268b lim: 40 exec/s: 0 rss: 74Mb L: 11/40 MS: 1 ChangeBinInt- 00:25:27.969 [2024-06-10 11:36:11.882058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.882083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:27.969 [2024-06-10 11:36:11.882159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d057d0 cdw11:9f9fd09f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.882174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:27.969 [2024-06-10 11:36:11.882229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.882242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:27.969 [2024-06-10 11:36:11.882305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.882329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:27.969 [2024-06-10 11:36:11.882385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:9f9f9f9f cdw11:d0d0570a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.969 [2024-06-10 11:36:11.882398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:27.969 #38 NEW cov: 12100 ft: 14948 corp: 16/308b lim: 40 exec/s: 38 rss: 74Mb L: 40/40 MS: 1 PersAutoDict- DE: "I\000\000\000\000\000\000\000"- 00:25:28.228 [2024-06-10 11:36:11.932071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:11.932097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.228 [2024-06-10 11:36:11.932154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d057a2 cdw11:d09f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:11.932168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.228 [2024-06-10 11:36:11.932222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:11.932235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:28.228 [2024-06-10 11:36:11.932309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:28.228 [2024-06-10 11:36:11.932323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:28.228 #39 NEW cov: 12100 ft: 14976 corp: 17/346b lim: 40 exec/s: 39 rss: 74Mb L: 38/40 MS: 1 ChangeByte- 00:25:28.228 [2024-06-10 11:36:11.971700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:11.971725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.228 #40 NEW cov: 12100 ft: 14992 corp: 18/357b lim: 40 exec/s: 40 rss: 74Mb L: 11/40 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:25:28.228 [2024-06-10 11:36:12.021857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.021882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.228 #41 NEW cov: 12100 ft: 15035 corp: 19/368b lim: 40 exec/s: 41 rss: 74Mb L: 11/40 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:25:28.228 [2024-06-10 11:36:12.061932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2bd0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.061956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.228 #42 NEW cov: 12100 ft: 15051 corp: 20/379b lim: 40 exec/s: 42 rss: 74Mb L: 11/40 MS: 1 ShuffleBytes- 00:25:28.228 [2024-06-10 11:36:12.112696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.112721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.228 [2024-06-10 11:36:12.112780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d057d0 cdw11:9f9fd09f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.112794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.228 [2024-06-10 11:36:12.112851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.112865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:28.228 [2024-06-10 11:36:12.112920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.112934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:28.228 [2024-06-10 11:36:12.112988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:9f9f9f9f cdw11:d0d057d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.113001] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:28.228 #43 NEW cov: 12100 ft: 15124 corp: 21/419b lim: 40 exec/s: 43 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:25:28.228 [2024-06-10 11:36:12.162359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d4 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.162385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.228 [2024-06-10 11:36:12.162458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.228 [2024-06-10 11:36:12.162472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.485 #44 NEW cov: 12100 ft: 15158 corp: 22/436b lim: 40 exec/s: 44 rss: 74Mb L: 17/40 MS: 1 ChangeBit- 00:25:28.486 [2024-06-10 11:36:12.202638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.202663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.486 [2024-06-10 11:36:12.202736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d057d0 cdw11:9f9fd09f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.202753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.486 [2024-06-10 11:36:12.202815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.202828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:28.486 #45 NEW cov: 12100 ft: 15165 corp: 23/467b lim: 40 exec/s: 45 rss: 74Mb L: 31/40 MS: 1 EraseBytes- 00:25:28.486 [2024-06-10 11:36:12.242592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2e0a49d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.242617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.486 [2024-06-10 11:36:12.242693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d057d0 cdw11:d0d0d000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.242708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.486 #46 NEW cov: 12100 ft: 15172 corp: 24/490b lim: 40 exec/s: 46 rss: 74Mb L: 23/40 MS: 1 CrossOver- 00:25:28.486 [2024-06-10 11:36:12.282694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.282718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.486 [2024-06-10 
11:36:12.282778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.282792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.486 #47 NEW cov: 12100 ft: 15184 corp: 25/509b lim: 40 exec/s: 47 rss: 74Mb L: 19/40 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:25:28.486 [2024-06-10 11:36:12.332895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2effffff cdw11:fffeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.332920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.486 [2024-06-10 11:36:12.332981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.332995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.486 #48 NEW cov: 12100 ft: 15197 corp: 26/528b lim: 40 exec/s: 48 rss: 74Mb L: 19/40 MS: 1 ChangeBit- 00:25:28.486 [2024-06-10 11:36:12.383018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.383043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.486 [2024-06-10 11:36:12.383101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.383114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.486 #49 NEW cov: 12100 ft: 15203 corp: 27/547b lim: 40 exec/s: 49 rss: 74Mb L: 19/40 MS: 1 ShuffleBytes- 00:25:28.486 [2024-06-10 11:36:12.423001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.486 [2024-06-10 11:36:12.423026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.744 #50 NEW cov: 12100 ft: 15207 corp: 28/562b lim: 40 exec/s: 50 rss: 74Mb L: 15/40 MS: 1 EraseBytes- 00:25:28.744 [2024-06-10 11:36:12.463288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.463313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.744 [2024-06-10 11:36:12.463391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.463405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.744 #51 NEW cov: 12100 ft: 15219 corp: 29/579b lim: 40 exec/s: 51 rss: 74Mb L: 17/40 MS: 1 CopyPart- 
00:25:28.744 [2024-06-10 11:36:12.503234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.503264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.744 #52 NEW cov: 12100 ft: 15235 corp: 30/594b lim: 40 exec/s: 52 rss: 74Mb L: 15/40 MS: 1 EraseBytes- 00:25:28.744 [2024-06-10 11:36:12.543498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.543523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.744 [2024-06-10 11:36:12.543583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.543598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.744 #53 NEW cov: 12100 ft: 15252 corp: 31/613b lim: 40 exec/s: 53 rss: 74Mb L: 19/40 MS: 1 ShuffleBytes- 00:25:28.744 [2024-06-10 11:36:12.593666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:302f2f36 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.593692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.744 [2024-06-10 11:36:12.593753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d0d0d0d0 cdw11:d0d0d057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.593767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.744 #54 NEW cov: 12100 ft: 15266 corp: 32/630b lim: 40 exec/s: 54 rss: 74Mb L: 17/40 MS: 1 ChangeBinInt- 00:25:28.744 [2024-06-10 11:36:12.633736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d028d0 cdw11:d0ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.633761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.744 [2024-06-10 11:36:12.633821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffd0d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.633835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.744 #55 NEW cov: 12100 ft: 15346 corp: 33/651b lim: 40 exec/s: 55 rss: 74Mb L: 21/40 MS: 1 ChangeBit- 00:25:28.744 [2024-06-10 11:36:12.683719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2e0abdff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-06-10 11:36:12.683745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:29.003 #56 NEW cov: 12100 ft: 15351 corp: 34/662b lim: 40 exec/s: 56 rss: 74Mb L: 11/40 MS: 1 ChangeByte- 00:25:29.003 
[2024-06-10 11:36:12.724470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:49000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.724497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:29.003 [2024-06-10 11:36:12.724559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00d0d057 cdw11:d09f9fd0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.724573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:29.003 [2024-06-10 11:36:12.724632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.724646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:29.003 [2024-06-10 11:36:12.724705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.724719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:29.003 [2024-06-10 11:36:12.724775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:9f9f9f9f cdw11:9fd0570a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.724789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:29.003 #57 NEW cov: 12100 ft: 15402 corp: 35/702b lim: 40 exec/s: 57 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:25:29.003 [2024-06-10 11:36:12.764101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2e0a49d0 cdw11:d0d0ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.764127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:29.003 [2024-06-10 11:36:12.764185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff57d0 cdw11:d0d0d000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.764199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:29.003 #58 NEW cov: 12100 ft: 15414 corp: 36/725b lim: 40 exec/s: 58 rss: 74Mb L: 23/40 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:25:29.003 [2024-06-10 11:36:12.814069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.814095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:29.003 #59 NEW cov: 12100 ft: 15428 corp: 37/736b lim: 40 exec/s: 59 rss: 74Mb L: 11/40 MS: 1 ChangeBinInt- 00:25:29.003 [2024-06-10 11:36:12.854381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0d0d0d0 cdw11:d0ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 
11:36:12.854408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:29.003 [2024-06-10 11:36:12.854467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff46d0d0 cdw11:d0d0d0d0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.854481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:29.003 #60 NEW cov: 12100 ft: 15456 corp: 38/757b lim: 40 exec/s: 60 rss: 75Mb L: 21/40 MS: 1 ChangeByte- 00:25:29.003 [2024-06-10 11:36:12.894468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2e0a49d0 cdw11:d0d0ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.894498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:29.003 [2024-06-10 11:36:12.894557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff57d0 cdw11:d0d0d000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-06-10 11:36:12.894571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:29.003 #61 NEW cov: 12100 ft: 15478 corp: 39/780b lim: 40 exec/s: 30 rss: 75Mb L: 23/40 MS: 1 ChangeBinInt- 00:25:29.003 #61 DONE cov: 12100 ft: 15478 corp: 39/780b lim: 40 exec/s: 30 rss: 75Mb 00:25:29.003 ###### Recommended dictionary. ###### 00:25:29.003 "I\000\000\000\000\000\000\000" # Uses: 1 00:25:29.003 "\377\377\377\377" # Uses: 2 00:25:29.003 "\377\377\377\377\377\377\377\377" # Uses: 1 00:25:29.003 ###### End of recommended dictionary. 
###### 00:25:29.003 Done 61 runs in 2 second(s) 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:29.262 11:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:25:29.262 [2024-06-10 11:36:13.124395] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:29.262 [2024-06-10 11:36:13.124475] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333225 ] 00:25:29.262 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.521 [2024-06-10 11:36:13.455460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.778 [2024-06-10 11:36:13.552066] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.778 [2024-06-10 11:36:13.612353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.778 [2024-06-10 11:36:13.628602] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:25:29.778 INFO: Running with entropic power schedule (0xFF, 100). 
00:25:29.778 INFO: Seed: 839080500 00:25:29.778 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:29.778 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:29.778 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:25:29.778 INFO: A corpus is not provided, starting from an empty corpus 00:25:29.778 #2 INITED exec/s: 0 rss: 65Mb 00:25:29.779 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:29.779 This may also happen if the target rejected all inputs we tried so far 00:25:29.779 [2024-06-10 11:36:13.683960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac2c2c2 cdw11:c2c2c244 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.779 [2024-06-10 11:36:13.683989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.344 NEW_FUNC[1/687]: 0x4937d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:25:30.344 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:30.344 #6 NEW cov: 11850 ft: 11851 corp: 2/10b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 4 CrossOver-CopyPart-InsertByte-InsertRepeatedBytes- 00:25:30.344 [2024-06-10 11:36:14.026355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.026406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.344 #12 NEW cov: 11984 ft: 12512 corp: 3/20b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:25:30.344 [2024-06-10 11:36:14.077447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.077475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.344 [2024-06-10 11:36:14.077564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.077580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.344 [2024-06-10 11:36:14.077673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.077690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:30.344 #13 NEW cov: 11990 ft: 13503 corp: 4/48b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:25:30.344 [2024-06-10 11:36:14.147389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac2c2c2 cdw11:0ac2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.147417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:25:30.344 [2024-06-10 11:36:14.147508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2c244 cdw11:0ac2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.147525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.344 #14 NEW cov: 12075 ft: 13906 corp: 5/66b lim: 40 exec/s: 0 rss: 72Mb L: 18/28 MS: 1 CrossOver- 00:25:30.344 [2024-06-10 11:36:14.207502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac2c2c2 cdw11:c2c2c209 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.207530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.344 #15 NEW cov: 12075 ft: 14040 corp: 6/75b lim: 40 exec/s: 0 rss: 72Mb L: 9/28 MS: 1 ChangeBinInt- 00:25:30.344 [2024-06-10 11:36:14.258304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.258331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.344 [2024-06-10 11:36:14.258421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2440a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.344 [2024-06-10 11:36:14.258438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.344 #16 NEW cov: 12075 ft: 14075 corp: 7/91b lim: 40 exec/s: 0 rss: 73Mb L: 16/28 MS: 1 EraseBytes- 00:25:30.603 [2024-06-10 11:36:14.318547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.318575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.603 [2024-06-10 11:36:14.318666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c23044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.318684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.603 #17 NEW cov: 12075 ft: 14168 corp: 8/108b lim: 40 exec/s: 0 rss: 73Mb L: 17/28 MS: 1 InsertByte- 00:25:30.603 [2024-06-10 11:36:14.379846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.379873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.603 [2024-06-10 11:36:14.379963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.379978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.603 [2024-06-10 11:36:14.380069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.380085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:30.603 [2024-06-10 11:36:14.380177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.380193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:30.603 [2024-06-10 11:36:14.380288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.380304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:30.603 #22 NEW cov: 12075 ft: 14549 corp: 9/148b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 5 CopyPart-CrossOver-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:25:30.603 [2024-06-10 11:36:14.428662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.428689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.603 #23 NEW cov: 12075 ft: 14610 corp: 10/160b lim: 40 exec/s: 0 rss: 73Mb L: 12/40 MS: 1 CopyPart- 00:25:30.603 [2024-06-10 11:36:14.489580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:11000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.489613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.603 [2024-06-10 11:36:14.489698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c23044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.489713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.603 #24 NEW cov: 12075 ft: 14640 corp: 11/177b lim: 40 exec/s: 0 rss: 73Mb L: 17/40 MS: 1 ChangeBinInt- 00:25:30.603 [2024-06-10 11:36:14.550127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.550152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.603 [2024-06-10 11:36:14.550233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.550253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.603 [2024-06-10 11:36:14.550350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:c2c2e0c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-06-10 11:36:14.550365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:30.862 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:30.862 #25 NEW cov: 12098 ft: 14678 corp: 12/206b lim: 40 exec/s: 0 rss: 73Mb L: 29/40 MS: 1 InsertByte- 00:25:30.862 [2024-06-10 11:36:14.600409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.600448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.862 [2024-06-10 11:36:14.600535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c23044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.600551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.862 [2024-06-10 11:36:14.600637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0a0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.600653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:30.862 #26 NEW cov: 12098 ft: 14816 corp: 13/232b lim: 40 exec/s: 0 rss: 73Mb L: 26/40 MS: 1 CrossOver- 00:25:30.862 [2024-06-10 11:36:14.651375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.651401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.862 [2024-06-10 11:36:14.651485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.651501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.862 [2024-06-10 11:36:14.651586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.651602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:30.862 [2024-06-10 11:36:14.651696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.651711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:30.862 [2024-06-10 11:36:14.651800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.651817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:30.862 #27 NEW cov: 12098 ft: 14868 corp: 14/272b lim: 40 exec/s: 27 rss: 73Mb L: 40/40 MS: 1 ShuffleBytes- 00:25:30.862 [2024-06-10 11:36:14.720951] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.720978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.862 [2024-06-10 11:36:14.721066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c5c2c2c2 cdw11:c2c23044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.721084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:30.862 [2024-06-10 11:36:14.721164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0a0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.721179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:30.862 #28 NEW cov: 12098 ft: 14929 corp: 15/298b lim: 40 exec/s: 28 rss: 73Mb L: 26/40 MS: 1 ChangeBinInt- 00:25:30.862 [2024-06-10 11:36:14.780276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac22ac2 cdw11:c2c2440a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.862 [2024-06-10 11:36:14.780303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:30.862 #30 NEW cov: 12098 ft: 14961 corp: 16/306b lim: 40 exec/s: 30 rss: 73Mb L: 8/40 MS: 2 EraseBytes-InsertByte- 00:25:31.121 [2024-06-10 11:36:14.831632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffff440a cdw11:0ac2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.831658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.121 [2024-06-10 11:36:14.831755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2c244 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.831773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.121 [2024-06-10 11:36:14.831865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff0ac2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.831880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.121 [2024-06-10 11:36:14.831967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:30440a0a cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.831983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.121 #31 NEW cov: 12098 ft: 15006 corp: 17/342b lim: 40 exec/s: 31 rss: 73Mb L: 36/40 MS: 1 CopyPart- 00:25:31.121 [2024-06-10 11:36:14.882155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.882183] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.121 [2024-06-10 11:36:14.882278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.882294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.121 [2024-06-10 11:36:14.882385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.882401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.121 [2024-06-10 11:36:14.882492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.882509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.121 [2024-06-10 11:36:14.882596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.121 [2024-06-10 11:36:14.882611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.121 #32 NEW cov: 12098 ft: 15055 corp: 18/382b lim: 40 exec/s: 32 rss: 73Mb L: 40/40 MS: 1 ShuffleBytes- 00:25:31.121 [2024-06-10 11:36:14.950909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c2c2c2c2 cdw11:0ac2c209 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.122 [2024-06-10 11:36:14.950934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.122 #33 NEW cov: 12098 ft: 15066 corp: 19/391b lim: 40 exec/s: 33 rss: 73Mb L: 9/40 MS: 1 ShuffleBytes- 00:25:31.122 [2024-06-10 11:36:15.011013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2ac20ac2 cdw11:c2c2440a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.122 [2024-06-10 11:36:15.011038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.122 #34 NEW cov: 12098 ft: 15120 corp: 20/399b lim: 40 exec/s: 34 rss: 74Mb L: 8/40 MS: 1 ShuffleBytes- 00:25:31.122 [2024-06-10 11:36:15.071424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:c2c2c2c2 cdw11:0ad0c209 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.380 [2024-06-10 11:36:15.071452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.380 #35 NEW cov: 12098 ft: 15131 corp: 21/408b lim: 40 exec/s: 35 rss: 74Mb L: 9/40 MS: 1 ChangeByte- 00:25:31.380 [2024-06-10 11:36:15.132374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.380 [2024-06-10 11:36:15.132411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.380 
[2024-06-10 11:36:15.132501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c5c2c2c2 cdw11:c2c23044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.380 [2024-06-10 11:36:15.132519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.380 [2024-06-10 11:36:15.132613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0a0ac2c2 cdw11:c2c2c3c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.380 [2024-06-10 11:36:15.132630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.380 #36 NEW cov: 12098 ft: 15149 corp: 22/434b lim: 40 exec/s: 36 rss: 74Mb L: 26/40 MS: 1 ChangeBit- 00:25:31.380 [2024-06-10 11:36:15.191781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:292ac20a cdw11:c2c2c244 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.380 [2024-06-10 11:36:15.191808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.380 #37 NEW cov: 12098 ft: 15167 corp: 23/443b lim: 40 exec/s: 37 rss: 74Mb L: 9/40 MS: 1 InsertByte- 00:25:31.380 [2024-06-10 11:36:15.252388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:11020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.380 [2024-06-10 11:36:15.252415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.380 [2024-06-10 11:36:15.252506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c23044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.380 [2024-06-10 11:36:15.252522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.380 #38 NEW cov: 12098 ft: 15199 corp: 24/460b lim: 40 exec/s: 38 rss: 74Mb L: 17/40 MS: 1 ChangeBit- 00:25:31.381 [2024-06-10 11:36:15.312541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:11020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.381 [2024-06-10 11:36:15.312568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.381 [2024-06-10 11:36:15.312654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c23021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.381 [2024-06-10 11:36:15.312670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.640 #39 NEW cov: 12098 ft: 15241 corp: 25/477b lim: 40 exec/s: 39 rss: 74Mb L: 17/40 MS: 1 ChangeByte- 00:25:31.640 [2024-06-10 11:36:15.373263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac22ac2 cdw11:ebebebeb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.640 [2024-06-10 11:36:15.373289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.640 [2024-06-10 11:36:15.373376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 
cdw10:ebebebeb cdw11:ebebebeb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.640 [2024-06-10 11:36:15.373392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.640 [2024-06-10 11:36:15.373476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ebebebeb cdw11:ebebc2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.640 [2024-06-10 11:36:15.373492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.640 #40 NEW cov: 12098 ft: 15245 corp: 26/503b lim: 40 exec/s: 40 rss: 74Mb L: 26/40 MS: 1 InsertRepeatedBytes- 00:25:31.640 [2024-06-10 11:36:15.422976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.640 [2024-06-10 11:36:15.423004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.640 [2024-06-10 11:36:15.423097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2e0c2 cdw11:c2c2c244 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.640 [2024-06-10 11:36:15.423114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.640 #41 NEW cov: 12098 ft: 15257 corp: 27/520b lim: 40 exec/s: 41 rss: 74Mb L: 17/40 MS: 1 EraseBytes- 00:25:31.640 [2024-06-10 11:36:15.483332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.640 [2024-06-10 11:36:15.483361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.640 [2024-06-10 11:36:15.483453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2c2e0c2 cdw11:c2c2c244 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.640 [2024-06-10 11:36:15.483469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.640 #42 NEW cov: 12098 ft: 15293 corp: 28/537b lim: 40 exec/s: 42 rss: 74Mb L: 17/40 MS: 1 ShuffleBytes- 00:25:31.640 [2024-06-10 11:36:15.544353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0ac2ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.641 [2024-06-10 11:36:15.544380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.641 [2024-06-10 11:36:15.544476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.641 [2024-06-10 11:36:15.544494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.641 [2024-06-10 11:36:15.544587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.641 [2024-06-10 11:36:15.544603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:25:31.641 [2024-06-10 11:36:15.544692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffc2c2c2 cdw11:c2c20900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.641 [2024-06-10 11:36:15.544710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.641 #43 NEW cov: 12098 ft: 15319 corp: 29/569b lim: 40 exec/s: 43 rss: 74Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:25:31.899 [2024-06-10 11:36:15.603238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.899 [2024-06-10 11:36:15.603272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.899 #47 NEW cov: 12098 ft: 15412 corp: 30/578b lim: 40 exec/s: 47 rss: 74Mb L: 9/40 MS: 4 CMP-EraseBytes-CopyPart-CopyPart- DE: "\377\377\377\000"- 00:25:31.899 [2024-06-10 11:36:15.653849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:11000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.899 [2024-06-10 11:36:15.653875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.899 [2024-06-10 11:36:15.653960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c2ff7ec2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.899 [2024-06-10 11:36:15.653976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.899 #48 NEW cov: 12098 ft: 15421 corp: 31/597b lim: 40 exec/s: 24 rss: 74Mb L: 19/40 MS: 1 CMP- DE: "\377~"- 00:25:31.899 #48 DONE cov: 12098 ft: 15421 corp: 31/597b lim: 40 exec/s: 24 rss: 74Mb 00:25:31.899 ###### Recommended dictionary. ###### 00:25:31.899 "\377\377\377\000" # Uses: 0 00:25:31.899 "\377~" # Uses: 0 00:25:31.899 ###### End of recommended dictionary. 
###### 00:25:31.899 Done 48 runs in 2 second(s) 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:31.899 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:31.900 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:25:31.900 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:25:31.900 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:25:31.900 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:25:31.900 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:31.900 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:31.900 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:31.900 11:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:25:32.158 [2024-06-10 11:36:15.858494] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:32.159 [2024-06-10 11:36:15.858569] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333591 ] 00:25:32.159 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.417 [2024-06-10 11:36:16.152356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.417 [2024-06-10 11:36:16.253750] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.417 [2024-06-10 11:36:16.314114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.417 [2024-06-10 11:36:16.330392] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:25:32.417 INFO: Running with entropic power schedule (0xFF, 100). 
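The nvmf/run.sh trace above shows the per-instance setup each fuzzer run gets: a dedicated TCP port derived from the fuzzer type, a private corpus directory, a copy of fuzz_json.conf rewritten to that port, LSAN leak suppressions, and finally the llvm_nvme_fuzz invocation. A minimal sketch of that sequence, reconstructed from the traced commands (the $rootdir variable and the file redirections are assumptions; the trace shows the commands but not their redirect targets):

# Sketch of the per-fuzzer setup suggested by the nvmf/run.sh trace (fuzzer_type 13 here).
# $rootdir stands for the SPDK checkout seen in the full paths above; redirections are assumed.
fuzzer_type=13
timen=1
core=0x1
port="44$(printf %02d "$fuzzer_type")"       # 13 -> 4413, matching the traced port=4413
corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
suppress_file="/var/tmp/suppress_nvmf_fuzz"
export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"

mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
# Rewrite the default listener port in the JSON config for this instance (redirection assumed).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
# LSAN suppressions for allocations the target intentionally keeps alive (redirection assumed).
echo leak:spdk_nvmf_qpair_disconnect >  "$suppress_file"
echo leak:nvmf_ctrlr_create          >> "$suppress_file"

"$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
    -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"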
00:25:32.417 INFO: Seed: 3540098330 00:25:32.417 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:32.417 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:32.417 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:25:32.417 INFO: A corpus is not provided, starting from an empty corpus 00:25:32.417 #2 INITED exec/s: 0 rss: 65Mb 00:25:32.417 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:32.417 This may also happen if the target rejected all inputs we tried so far 00:25:32.676 [2024-06-10 11:36:16.385813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.676 [2024-06-10 11:36:16.385842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:32.676 [2024-06-10 11:36:16.385914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.676 [2024-06-10 11:36:16.385930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:32.934 NEW_FUNC[1/686]: 0x495390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:25:32.934 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:32.934 #8 NEW cov: 11842 ft: 11839 corp: 2/17b lim: 40 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:25:32.934 [2024-06-10 11:36:16.726708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.726748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:32.934 [2024-06-10 11:36:16.726803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.726818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:32.934 #9 NEW cov: 11972 ft: 12391 corp: 3/33b lim: 40 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 ShuffleBytes- 00:25:32.934 [2024-06-10 11:36:16.776674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.776704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:32.934 [2024-06-10 11:36:16.776760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.776773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:32.934 #10 NEW cov: 11978 ft: 12695 corp: 4/49b lim: 40 exec/s: 0 rss: 72Mb L: 16/16 MS: 
1 ChangeBinInt- 00:25:32.934 [2024-06-10 11:36:16.816893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7f7f7f7f cdw11:7f7f7f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.816918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:32.934 [2024-06-10 11:36:16.816989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:7f7f7f7f cdw11:7f7f7f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.817003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:32.934 [2024-06-10 11:36:16.817056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:7f7f7f7f cdw11:7f7f7f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.817069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:32.934 #18 NEW cov: 12063 ft: 13227 corp: 5/77b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:25:32.934 [2024-06-10 11:36:16.857084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.857112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:32.934 [2024-06-10 11:36:16.857168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.857183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:32.934 [2024-06-10 11:36:16.857238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.934 [2024-06-10 11:36:16.857258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.193 #19 NEW cov: 12063 ft: 13305 corp: 6/107b lim: 40 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CrossOver- 00:25:33.193 [2024-06-10 11:36:16.907055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:16.907082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.193 [2024-06-10 11:36:16.907138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:16.907152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.193 #20 NEW cov: 12063 ft: 13465 corp: 7/123b lim: 40 exec/s: 0 rss: 72Mb L: 16/30 MS: 1 ChangeBit- 00:25:33.193 [2024-06-10 11:36:16.957185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:16.957211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.193 [2024-06-10 11:36:16.957266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:16.957282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.193 #21 NEW cov: 12063 ft: 13495 corp: 8/142b lim: 40 exec/s: 0 rss: 72Mb L: 19/30 MS: 1 CopyPart- 00:25:33.193 [2024-06-10 11:36:17.007355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000000f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:17.007382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.193 [2024-06-10 11:36:17.007438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:17.007452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.193 #22 NEW cov: 12063 ft: 13516 corp: 9/158b lim: 40 exec/s: 0 rss: 72Mb L: 16/30 MS: 1 ChangeBinInt- 00:25:33.193 [2024-06-10 11:36:17.047448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:17.047474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.193 [2024-06-10 11:36:17.047529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:17.047542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.193 #23 NEW cov: 12063 ft: 13570 corp: 10/174b lim: 40 exec/s: 0 rss: 72Mb L: 16/30 MS: 1 CopyPart- 00:25:33.193 [2024-06-10 11:36:17.087576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:17.087602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.193 [2024-06-10 11:36:17.087658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000092 cdw11:00030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:17.087672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.193 #24 NEW cov: 12063 ft: 13638 corp: 11/190b lim: 40 exec/s: 0 rss: 72Mb L: 16/30 MS: 1 ChangeByte- 00:25:33.193 [2024-06-10 11:36:17.127669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:17.127694] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.193 [2024-06-10 11:36:17.127748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000092 cdw11:00030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.193 [2024-06-10 11:36:17.127762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.451 #25 NEW cov: 12063 ft: 13700 corp: 12/206b lim: 40 exec/s: 0 rss: 72Mb L: 16/30 MS: 1 ChangeBinInt- 00:25:33.451 [2024-06-10 11:36:17.177841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.451 [2024-06-10 11:36:17.177866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.451 [2024-06-10 11:36:17.177936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000ff06 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.451 [2024-06-10 11:36:17.177950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.451 #26 NEW cov: 12063 ft: 13708 corp: 13/222b lim: 40 exec/s: 0 rss: 73Mb L: 16/30 MS: 1 CMP- DE: "\377\006"- 00:25:33.451 [2024-06-10 11:36:17.218078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.451 [2024-06-10 11:36:17.218103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.451 [2024-06-10 11:36:17.218159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.451 [2024-06-10 11:36:17.218173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.451 [2024-06-10 11:36:17.218227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3b000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.218240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.452 #27 NEW cov: 12063 ft: 13805 corp: 14/253b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertByte- 00:25:33.452 [2024-06-10 11:36:17.268074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.268099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.452 [2024-06-10 11:36:17.268156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00bbbbbb cdw11:bbbbbb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.268170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.452 NEW_FUNC[1/1]: 0x1a77150 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:33.452 #33 NEW cov: 12086 ft: 13858 corp: 15/275b lim: 40 exec/s: 0 rss: 73Mb L: 22/31 MS: 1 InsertRepeatedBytes- 00:25:33.452 [2024-06-10 11:36:17.308205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.308230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.452 [2024-06-10 11:36:17.308309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.308324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.452 #34 NEW cov: 12086 ft: 13878 corp: 16/294b lim: 40 exec/s: 0 rss: 73Mb L: 19/31 MS: 1 ShuffleBytes- 00:25:33.452 [2024-06-10 11:36:17.358501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.358526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.452 [2024-06-10 11:36:17.358584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000092 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.358598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.452 [2024-06-10 11:36:17.358652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000092 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.358666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.452 #35 NEW cov: 12086 ft: 13898 corp: 17/322b lim: 40 exec/s: 35 rss: 73Mb L: 28/31 MS: 1 CopyPart- 00:25:33.452 [2024-06-10 11:36:17.398347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000092 cdw11:00030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.452 [2024-06-10 11:36:17.398373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.710 #36 NEW cov: 12086 ft: 14229 corp: 18/330b lim: 40 exec/s: 36 rss: 73Mb L: 8/31 MS: 1 EraseBytes- 00:25:33.710 [2024-06-10 11:36:17.438597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.438621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.710 [2024-06-10 11:36:17.438692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00009800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.438706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:25:33.710 #37 NEW cov: 12086 ft: 14259 corp: 19/346b lim: 40 exec/s: 37 rss: 73Mb L: 16/31 MS: 1 ChangeByte- 00:25:33.710 [2024-06-10 11:36:17.478615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.478640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.710 #38 NEW cov: 12086 ft: 14288 corp: 20/354b lim: 40 exec/s: 38 rss: 73Mb L: 8/31 MS: 1 ChangeBinInt- 00:25:33.710 [2024-06-10 11:36:17.528873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:060a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.528898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.710 [2024-06-10 11:36:17.528967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.528982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.710 #39 NEW cov: 12086 ft: 14296 corp: 21/371b lim: 40 exec/s: 39 rss: 73Mb L: 17/31 MS: 1 CopyPart- 00:25:33.710 [2024-06-10 11:36:17.578961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:20000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.578986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.710 [2024-06-10 11:36:17.579058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.579073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.710 #40 NEW cov: 12086 ft: 14359 corp: 22/390b lim: 40 exec/s: 40 rss: 73Mb L: 19/31 MS: 1 ChangeBit- 00:25:33.710 [2024-06-10 11:36:17.629117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.629142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.710 [2024-06-10 11:36:17.629213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.710 [2024-06-10 11:36:17.629228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.710 #41 NEW cov: 12086 ft: 14363 corp: 23/406b lim: 40 exec/s: 41 rss: 73Mb L: 16/31 MS: 1 ChangeBit- 00:25:33.969 [2024-06-10 11:36:17.669255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000092 cdw11:000edd5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.669280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:25:33.969 [2024-06-10 11:36:17.669335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2b100ef8 cdw11:00030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.669349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.969 #42 NEW cov: 12086 ft: 14378 corp: 24/422b lim: 40 exec/s: 42 rss: 73Mb L: 16/31 MS: 1 CMP- DE: "\000\016\335[+\020\016\370"- 00:25:33.969 [2024-06-10 11:36:17.709375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.709400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.969 [2024-06-10 11:36:17.709456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.709470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.969 #43 NEW cov: 12086 ft: 14404 corp: 25/441b lim: 40 exec/s: 43 rss: 73Mb L: 19/31 MS: 1 CopyPart- 00:25:33.969 [2024-06-10 11:36:17.749436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.749460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.969 [2024-06-10 11:36:17.749532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000098 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.749546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.969 #44 NEW cov: 12086 ft: 14410 corp: 26/458b lim: 40 exec/s: 44 rss: 73Mb L: 17/31 MS: 1 InsertByte- 00:25:33.969 [2024-06-10 11:36:17.799616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.799640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.969 [2024-06-10 11:36:17.799708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:005bbbbb cdw11:bbbbbb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.799722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.969 #45 NEW cov: 12086 ft: 14419 corp: 27/480b lim: 40 exec/s: 45 rss: 73Mb L: 22/31 MS: 1 ChangeByte- 00:25:33.969 [2024-06-10 11:36:17.849800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.849825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.969 [2024-06-10 11:36:17.849896] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000106 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.849910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.969 #46 NEW cov: 12086 ft: 14472 corp: 28/496b lim: 40 exec/s: 46 rss: 73Mb L: 16/31 MS: 1 ChangeBinInt- 00:25:33.969 [2024-06-10 11:36:17.889985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.890010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.969 [2024-06-10 11:36:17.890080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.890094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.969 [2024-06-10 11:36:17.890147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.969 [2024-06-10 11:36:17.890160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.969 #47 NEW cov: 12086 ft: 14484 corp: 29/526b lim: 40 exec/s: 47 rss: 73Mb L: 30/31 MS: 1 PersAutoDict- DE: "\377\006"- 00:25:34.228 [2024-06-10 11:36:17.929974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:17.929999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.228 [2024-06-10 11:36:17.930054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00bbbbbb cdw11:bbbbbb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:17.930069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.228 #48 NEW cov: 12086 ft: 14537 corp: 30/548b lim: 40 exec/s: 48 rss: 73Mb L: 22/31 MS: 1 ChangeBit- 00:25:34.228 [2024-06-10 11:36:17.970262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:17.970288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.228 [2024-06-10 11:36:17.970343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:17.970360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.228 [2024-06-10 11:36:17.970414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 
[2024-06-10 11:36:17.970428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:34.228 #49 NEW cov: 12086 ft: 14549 corp: 31/576b lim: 40 exec/s: 49 rss: 73Mb L: 28/31 MS: 1 InsertRepeatedBytes- 00:25:34.228 [2024-06-10 11:36:18.020223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:18.020252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.228 [2024-06-10 11:36:18.020309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00bbbbbb cdw11:bbbbbb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:18.020323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.228 #50 NEW cov: 12086 ft: 14579 corp: 32/598b lim: 40 exec/s: 50 rss: 74Mb L: 22/31 MS: 1 ShuffleBytes- 00:25:34.228 [2024-06-10 11:36:18.060207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:18.060232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.228 #51 NEW cov: 12086 ft: 14583 corp: 33/606b lim: 40 exec/s: 51 rss: 74Mb L: 8/31 MS: 1 EraseBytes- 00:25:34.228 [2024-06-10 11:36:18.100474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:18.100501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.228 [2024-06-10 11:36:18.100560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:67030000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:18.100574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.228 #52 NEW cov: 12086 ft: 14598 corp: 34/622b lim: 40 exec/s: 52 rss: 74Mb L: 16/31 MS: 1 ChangeByte- 00:25:34.228 [2024-06-10 11:36:18.140721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:18.140747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.228 [2024-06-10 11:36:18.140805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:18.140820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.228 [2024-06-10 11:36:18.140875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dd5b2b10 cdw11:0ef8ff06 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.228 [2024-06-10 11:36:18.140889] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:34.228 #53 NEW cov: 12086 ft: 14638 corp: 35/646b lim: 40 exec/s: 53 rss: 74Mb L: 24/31 MS: 1 PersAutoDict- DE: "\000\016\335[+\020\016\370"- 00:25:34.487 [2024-06-10 11:36:18.180711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:060a0000 cdw11:00ff0600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.180740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.180797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.180813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.487 #54 NEW cov: 12086 ft: 14654 corp: 36/663b lim: 40 exec/s: 54 rss: 74Mb L: 17/31 MS: 1 PersAutoDict- DE: "\377\006"- 00:25:34.487 [2024-06-10 11:36:18.231119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.231146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.231205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.231219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.231281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.231295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.231351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.231366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:34.487 #55 NEW cov: 12086 ft: 15097 corp: 37/700b lim: 40 exec/s: 55 rss: 74Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:25:34.487 [2024-06-10 11:36:18.271264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.271291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.271349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.271363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.271417] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.271432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.271487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.271500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:34.487 #56 NEW cov: 12086 ft: 15116 corp: 38/736b lim: 40 exec/s: 56 rss: 74Mb L: 36/37 MS: 1 InsertRepeatedBytes- 00:25:34.487 [2024-06-10 11:36:18.321148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.321175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.321233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00bbbbbb cdw11:bbbbbb01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.321257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.487 #57 NEW cov: 12086 ft: 15120 corp: 39/758b lim: 40 exec/s: 57 rss: 74Mb L: 22/37 MS: 1 CMP- DE: "\001\030"- 00:25:34.487 [2024-06-10 11:36:18.371585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.371612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.371684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000edd5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.371698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.371755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2b100ef8 cdw11:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.371770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:34.487 [2024-06-10 11:36:18.371826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dd5b2b10 cdw11:0ef8ff06 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.487 [2024-06-10 11:36:18.371839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:34.487 #58 NEW cov: 12086 ft: 15131 corp: 40/790b lim: 40 exec/s: 29 rss: 74Mb L: 32/37 MS: 1 PersAutoDict- DE: "\000\016\335[+\020\016\370"- 00:25:34.487 #58 DONE cov: 12086 ft: 15131 corp: 40/790b lim: 40 exec/s: 29 rss: 74Mb 00:25:34.488 ###### Recommended dictionary. 
###### 00:25:34.488 "\377\006" # Uses: 3 00:25:34.488 "\000\016\335[+\020\016\370" # Uses: 2 00:25:34.488 "\001\030" # Uses: 0 00:25:34.488 ###### End of recommended dictionary. ###### 00:25:34.488 Done 58 runs in 2 second(s) 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:34.747 11:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:25:34.747 [2024-06-10 11:36:18.600048] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:25:34.747 [2024-06-10 11:36:18.600124] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333951 ] 00:25:34.747 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.006 [2024-06-10 11:36:18.931809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.264 [2024-06-10 11:36:19.033372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.264 [2024-06-10 11:36:19.093809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.264 [2024-06-10 11:36:19.110067] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:25:35.264 INFO: Running with entropic power schedule (0xFF, 100). 00:25:35.264 INFO: Seed: 2026129890 00:25:35.264 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:35.264 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:35.264 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:25:35.264 INFO: A corpus is not provided, starting from an empty corpus 00:25:35.264 #2 INITED exec/s: 0 rss: 65Mb 00:25:35.264 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:35.264 This may also happen if the target rejected all inputs we tried so far 00:25:35.264 [2024-06-10 11:36:19.165427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.264 [2024-06-10 11:36:19.165458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.832 NEW_FUNC[1/687]: 0x496f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:25:35.832 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:35.832 #12 NEW cov: 11835 ft: 11836 corp: 2/9b lim: 35 exec/s: 0 rss: 72Mb L: 8/8 MS: 5 ChangeByte-ShuffleBytes-CopyPart-InsertRepeatedBytes-CopyPart- 00:25:35.832 [2024-06-10 11:36:19.506545] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.506591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.832 #13 NEW cov: 11966 ft: 12400 corp: 3/17b lim: 35 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 ChangeByte- 00:25:35.832 [2024-06-10 11:36:19.556521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.556552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.832 [2024-06-10 11:36:19.556609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.556623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:35.832 #18 NEW cov: 11979 ft: 13451 corp: 4/33b 
lim: 35 exec/s: 0 rss: 72Mb L: 16/16 MS: 5 ChangeBit-ChangeByte-InsertByte-ChangeByte-InsertRepeatedBytes- 00:25:35.832 [2024-06-10 11:36:19.596600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.596628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.832 [2024-06-10 11:36:19.596692] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.596709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:35.832 #19 NEW cov: 12064 ft: 13671 corp: 5/47b lim: 35 exec/s: 0 rss: 72Mb L: 14/16 MS: 1 InsertRepeatedBytes- 00:25:35.832 [2024-06-10 11:36:19.636722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.636750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.832 [2024-06-10 11:36:19.636810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.636827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:35.832 #20 NEW cov: 12064 ft: 13803 corp: 6/61b lim: 35 exec/s: 0 rss: 72Mb L: 14/16 MS: 1 CopyPart- 00:25:35.832 [2024-06-10 11:36:19.686646] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.686675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.832 #21 NEW cov: 12064 ft: 13859 corp: 7/69b lim: 35 exec/s: 0 rss: 72Mb L: 8/16 MS: 1 ShuffleBytes- 00:25:35.832 [2024-06-10 11:36:19.737336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.737365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.832 [2024-06-10 11:36:19.737427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.737444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:35.832 [2024-06-10 11:36:19.737504] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.737520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:35.832 [2024-06-10 11:36:19.737581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.832 [2024-06-10 11:36:19.737598] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:35.832 #22 NEW cov: 12064 ft: 14258 corp: 8/97b lim: 35 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 CrossOver- 00:25:36.090 [2024-06-10 11:36:19.787163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.090 [2024-06-10 11:36:19.787192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.091 [2024-06-10 11:36:19.787253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.091 [2024-06-10 11:36:19.787269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.091 #23 NEW cov: 12064 ft: 14291 corp: 9/112b lim: 35 exec/s: 0 rss: 72Mb L: 15/28 MS: 1 InsertByte- 00:25:36.091 [2024-06-10 11:36:19.827109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.091 [2024-06-10 11:36:19.827136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.091 #24 NEW cov: 12064 ft: 14396 corp: 10/124b lim: 35 exec/s: 0 rss: 72Mb L: 12/28 MS: 1 CopyPart- 00:25:36.091 [2024-06-10 11:36:19.867399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.091 [2024-06-10 11:36:19.867426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.091 [2024-06-10 11:36:19.867503] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.091 [2024-06-10 11:36:19.867520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.091 #25 NEW cov: 12064 ft: 14440 corp: 11/139b lim: 35 exec/s: 0 rss: 72Mb L: 15/28 MS: 1 InsertByte- 00:25:36.091 NEW_FUNC[1/2]: 0x4b8410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:25:36.091 NEW_FUNC[2/2]: 0x11e8a00 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1761 00:25:36.091 #26 NEW cov: 12097 ft: 14520 corp: 12/147b lim: 35 exec/s: 0 rss: 73Mb L: 8/28 MS: 1 CrossOver- 00:25:36.091 [2024-06-10 11:36:19.947444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.091 [2024-06-10 11:36:19.947471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.091 #27 NEW cov: 12097 ft: 14560 corp: 13/155b lim: 35 exec/s: 0 rss: 73Mb L: 8/28 MS: 1 ChangeBit- 00:25:36.091 [2024-06-10 11:36:19.997606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.091 [2024-06-10 11:36:19.997632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:36.091 #29 NEW cov: 12097 ft: 14588 corp: 14/164b lim: 35 exec/s: 0 rss: 73Mb L: 9/28 MS: 2 EraseBytes-CrossOver- 00:25:36.349 [2024-06-10 11:36:20.047956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.047994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.349 [2024-06-10 11:36:20.048050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.048068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.349 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:36.349 #30 NEW cov: 12120 ft: 14638 corp: 15/180b lim: 35 exec/s: 0 rss: 73Mb L: 16/28 MS: 1 InsertByte- 00:25:36.349 [2024-06-10 11:36:20.107903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.107937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.349 #31 NEW cov: 12120 ft: 14655 corp: 16/188b lim: 35 exec/s: 0 rss: 73Mb L: 8/28 MS: 1 ChangeByte- 00:25:36.349 [2024-06-10 11:36:20.148057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.148084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.349 #32 NEW cov: 12120 ft: 14724 corp: 17/198b lim: 35 exec/s: 32 rss: 73Mb L: 10/28 MS: 1 InsertByte- 00:25:36.349 [2024-06-10 11:36:20.198165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.198194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.349 #33 NEW cov: 12120 ft: 14745 corp: 18/206b lim: 35 exec/s: 33 rss: 73Mb L: 8/28 MS: 1 ChangeByte- 00:25:36.349 [2024-06-10 11:36:20.248626] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.248655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.349 [2024-06-10 11:36:20.248717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.248734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.349 [2024-06-10 11:36:20.248795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.248809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:36.349 #34 NEW cov: 12120 ft: 14959 
corp: 19/227b lim: 35 exec/s: 34 rss: 73Mb L: 21/28 MS: 1 InsertRepeatedBytes- 00:25:36.349 [2024-06-10 11:36:20.288852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.288880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.349 [2024-06-10 11:36:20.288958] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.288976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.349 [2024-06-10 11:36:20.289036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.289050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:36.349 [2024-06-10 11:36:20.289107] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.349 [2024-06-10 11:36:20.289123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:36.607 #35 NEW cov: 12120 ft: 14966 corp: 20/257b lim: 35 exec/s: 35 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:25:36.607 [2024-06-10 11:36:20.328653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.328679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.607 [2024-06-10 11:36:20.328742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.328756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.607 #36 NEW cov: 12120 ft: 14980 corp: 21/273b lim: 35 exec/s: 36 rss: 73Mb L: 16/30 MS: 1 ChangeBit- 00:25:36.607 [2024-06-10 11:36:20.378834] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.378861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.607 [2024-06-10 11:36:20.378940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.378957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.607 #37 NEW cov: 12120 ft: 15004 corp: 22/287b lim: 35 exec/s: 37 rss: 73Mb L: 14/30 MS: 1 InsertRepeatedBytes- 00:25:36.607 [2024-06-10 11:36:20.418905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.418932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT 
SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.607 [2024-06-10 11:36:20.419010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.419028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.607 #38 NEW cov: 12120 ft: 15013 corp: 23/301b lim: 35 exec/s: 38 rss: 73Mb L: 14/30 MS: 1 ChangeByte- 00:25:36.607 [2024-06-10 11:36:20.468883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.468910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.607 #44 NEW cov: 12120 ft: 15024 corp: 24/309b lim: 35 exec/s: 44 rss: 73Mb L: 8/30 MS: 1 CrossOver- 00:25:36.607 [2024-06-10 11:36:20.509333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.509360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.607 [2024-06-10 11:36:20.509446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.509463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.607 [2024-06-10 11:36:20.509521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.607 [2024-06-10 11:36:20.509535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:36.607 #45 NEW cov: 12120 ft: 15056 corp: 25/330b lim: 35 exec/s: 45 rss: 73Mb L: 21/30 MS: 1 ChangeByte- 00:25:36.865 [2024-06-10 11:36:20.559185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.559213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.865 #47 NEW cov: 12120 ft: 15125 corp: 26/343b lim: 35 exec/s: 47 rss: 73Mb L: 13/30 MS: 2 EraseBytes-CMP- DE: "\377\015\335\\\267.\371\342"- 00:25:36.865 [2024-06-10 11:36:20.609464] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.609490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.865 [2024-06-10 11:36:20.609568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.609584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.865 #48 NEW cov: 12120 ft: 15129 corp: 27/357b lim: 35 exec/s: 48 rss: 73Mb L: 14/30 MS: 1 ShuffleBytes- 00:25:36.865 [2024-06-10 11:36:20.659640] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.659665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.865 [2024-06-10 11:36:20.659742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.659760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.865 #49 NEW cov: 12120 ft: 15136 corp: 28/374b lim: 35 exec/s: 49 rss: 74Mb L: 17/30 MS: 1 InsertByte- 00:25:36.865 [2024-06-10 11:36:20.709745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.709770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.865 [2024-06-10 11:36:20.709829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.709842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.865 #50 NEW cov: 12120 ft: 15148 corp: 29/391b lim: 35 exec/s: 50 rss: 74Mb L: 17/30 MS: 1 ChangeBit- 00:25:36.865 [2024-06-10 11:36:20.759867] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.759894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.865 [2024-06-10 11:36:20.759974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.759990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:36.865 #51 NEW cov: 12120 ft: 15153 corp: 30/405b lim: 35 exec/s: 51 rss: 74Mb L: 14/30 MS: 1 ChangeByte- 00:25:36.865 [2024-06-10 11:36:20.800046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.800073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:36.865 [2024-06-10 11:36:20.800134] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.865 [2024-06-10 11:36:20.800149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:37.124 #52 NEW cov: 12120 ft: 15162 corp: 31/423b lim: 35 exec/s: 52 rss: 74Mb L: 18/30 MS: 1 InsertByte- 00:25:37.124 [2024-06-10 11:36:20.840176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.840204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 
11:36:20.840261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.840276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:37.124 #53 NEW cov: 12120 ft: 15182 corp: 32/439b lim: 35 exec/s: 53 rss: 74Mb L: 16/30 MS: 1 ChangeByte- 00:25:37.124 [2024-06-10 11:36:20.880579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.880606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 11:36:20.880685] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.880702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 11:36:20.880764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.880784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 11:36:20.880846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.880864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:37.124 #54 NEW cov: 12120 ft: 15193 corp: 33/469b lim: 35 exec/s: 54 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:25:37.124 #62 NEW cov: 12120 ft: 15205 corp: 34/479b lim: 35 exec/s: 62 rss: 74Mb L: 10/30 MS: 3 CopyPart-InsertByte-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:25:37.124 [2024-06-10 11:36:20.970650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.970678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 11:36:20.970756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.970771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 11:36:20.970831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:20.970848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:37.124 #63 NEW cov: 12120 ft: 15212 corp: 35/502b lim: 35 exec/s: 63 rss: 74Mb L: 23/30 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:25:37.124 [2024-06-10 11:36:21.020802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 
11:36:21.020829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 11:36:21.020905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:21.020923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 11:36:21.020987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000005c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:21.021004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:37.124 #64 NEW cov: 12120 ft: 15226 corp: 36/524b lim: 35 exec/s: 64 rss: 74Mb L: 22/30 MS: 1 CMP- DE: "\000\016\335\\\366\227\333p"- 00:25:37.124 [2024-06-10 11:36:21.060805] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:21.060834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:37.124 [2024-06-10 11:36:21.060897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.124 [2024-06-10 11:36:21.060915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:37.383 #65 NEW cov: 12120 ft: 15230 corp: 37/540b lim: 35 exec/s: 65 rss: 74Mb L: 16/30 MS: 1 ShuffleBytes- 00:25:37.383 [2024-06-10 11:36:21.110897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.383 [2024-06-10 11:36:21.110926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:37.383 [2024-06-10 11:36:21.111003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.383 [2024-06-10 11:36:21.111024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:37.383 #66 NEW cov: 12120 ft: 15249 corp: 38/555b lim: 35 exec/s: 33 rss: 74Mb L: 15/30 MS: 1 InsertByte- 00:25:37.383 #66 DONE cov: 12120 ft: 15249 corp: 38/555b lim: 35 exec/s: 33 rss: 74Mb 00:25:37.383 ###### Recommended dictionary. ###### 00:25:37.383 "\377\015\335\\\267.\371\342" # Uses: 0 00:25:37.383 "\001\000\000\000\000\000\000\000" # Uses: 1 00:25:37.383 "\000\016\335\\\366\227\333p" # Uses: 0 00:25:37.383 ###### End of recommended dictionary. 
###### 00:25:37.383 Done 66 runs in 2 second(s) 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:37.383 11:36:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:25:37.642 [2024-06-10 11:36:21.336544] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:37.642 [2024-06-10 11:36:21.336627] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334307 ] 00:25:37.642 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.900 [2024-06-10 11:36:21.666394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.900 [2024-06-10 11:36:21.767138] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.900 [2024-06-10 11:36:21.827389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.900 [2024-06-10 11:36:21.843645] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:25:38.200 INFO: Running with entropic power schedule (0xFF, 100). 
00:25:38.200 INFO: Seed: 465141663 00:25:38.200 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:38.200 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:38.200 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:25:38.200 INFO: A corpus is not provided, starting from an empty corpus 00:25:38.200 #2 INITED exec/s: 0 rss: 65Mb 00:25:38.200 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:38.200 This may also happen if the target rejected all inputs we tried so far 00:25:38.200 [2024-06-10 11:36:21.899304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.200 [2024-06-10 11:36:21.899335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.200 [2024-06-10 11:36:21.899397] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.200 [2024-06-10 11:36:21.899411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.200 [2024-06-10 11:36:21.899471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.200 [2024-06-10 11:36:21.899486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.481 NEW_FUNC[1/686]: 0x498490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:25:38.481 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:38.481 #3 NEW cov: 11824 ft: 11822 corp: 2/27b lim: 35 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:25:38.481 [2024-06-10 11:36:22.250118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.481 [2024-06-10 11:36:22.250163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.481 [2024-06-10 11:36:22.250225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.481 [2024-06-10 11:36:22.250241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.481 [2024-06-10 11:36:22.250306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.250321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.482 #9 NEW cov: 11954 ft: 12426 corp: 3/53b lim: 35 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ChangeBit- 00:25:38.482 [2024-06-10 11:36:22.310140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.310171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.482 [2024-06-10 11:36:22.310229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.310242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.482 [2024-06-10 11:36:22.310305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.310319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.482 #10 NEW cov: 11960 ft: 12602 corp: 4/79b lim: 35 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 CopyPart- 00:25:38.482 [2024-06-10 11:36:22.360234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.360266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.482 [2024-06-10 11:36:22.360326] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.360341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.482 [2024-06-10 11:36:22.360413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.360427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.482 #21 NEW cov: 12045 ft: 12863 corp: 5/105b lim: 35 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ChangeBinInt- 00:25:38.482 [2024-06-10 11:36:22.400628] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.400654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.482 [2024-06-10 11:36:22.400710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.400726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.482 [2024-06-10 11:36:22.400783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.400796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.482 [2024-06-10 11:36:22.400851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.400865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:38.482 [2024-06-10 11:36:22.400919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.482 [2024-06-10 11:36:22.400933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:38.482 #22 NEW cov: 12045 ft: 13519 corp: 6/140b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:25:38.741 [2024-06-10 11:36:22.440470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-06-10 11:36:22.440497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.741 [2024-06-10 11:36:22.440570] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-06-10 11:36:22.440585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.741 [2024-06-10 11:36:22.440642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-06-10 11:36:22.440657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.741 #23 NEW cov: 12045 ft: 13558 corp: 7/166b lim: 35 exec/s: 0 rss: 72Mb L: 26/35 MS: 1 ChangeBinInt- 00:25:38.741 [2024-06-10 11:36:22.490825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-06-10 11:36:22.490851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.741 [2024-06-10 11:36:22.490906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-06-10 11:36:22.490920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.741 [2024-06-10 11:36:22.490978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-06-10 11:36:22.490991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.741 [2024-06-10 11:36:22.491046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.491059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.491113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.491126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:38.742 #24 NEW cov: 12045 ft: 13687 corp: 8/201b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:25:38.742 [2024-06-10 11:36:22.540723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.540748] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.540810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.540824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.540882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.540894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.742 #25 NEW cov: 12045 ft: 13733 corp: 9/227b lim: 35 exec/s: 0 rss: 73Mb L: 26/35 MS: 1 CrossOver- 00:25:38.742 [2024-06-10 11:36:22.591118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.591142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.591202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.591216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.591273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.591287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.591345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.591358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.591412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.591425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:38.742 #26 NEW cov: 12045 ft: 13746 corp: 10/262b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:25:38.742 [2024-06-10 11:36:22.631214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.631240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.631308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.631323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.631380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.631393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.631452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.631465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.631521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.631534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:38.742 #27 NEW cov: 12045 ft: 13788 corp: 11/297b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:25:38.742 [2024-06-10 11:36:22.671105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.671130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.671188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.671202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.742 [2024-06-10 11:36:22.671263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-06-10 11:36:22.671276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.001 #28 NEW cov: 12045 ft: 13800 corp: 12/324b lim: 35 exec/s: 0 rss: 73Mb L: 27/35 MS: 1 EraseBytes- 00:25:39.001 [2024-06-10 11:36:22.711215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.001 [2024-06-10 11:36:22.711240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.001 [2024-06-10 11:36:22.711305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.001 [2024-06-10 11:36:22.711319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.001 [2024-06-10 11:36:22.711379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.001 [2024-06-10 11:36:22.711392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.001 #29 NEW cov: 12045 ft: 13814 corp: 13/351b lim: 35 exec/s: 0 rss: 73Mb L: 27/35 MS: 1 CopyPart- 00:25:39.001 [2024-06-10 11:36:22.761470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.001 
[2024-06-10 11:36:22.761495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.001 [2024-06-10 11:36:22.761551] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.001 [2024-06-10 11:36:22.761569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.001 [2024-06-10 11:36:22.761625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.001 [2024-06-10 11:36:22.761638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.001 [2024-06-10 11:36:22.761695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.001 [2024-06-10 11:36:22.761709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:39.002 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:39.002 #30 NEW cov: 12068 ft: 13881 corp: 14/382b lim: 35 exec/s: 0 rss: 73Mb L: 31/35 MS: 1 InsertRepeatedBytes- 00:25:39.002 [2024-06-10 11:36:22.801349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.801373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.002 [2024-06-10 11:36:22.801445] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.801459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.002 #31 NEW cov: 12068 ft: 14154 corp: 15/399b lim: 35 exec/s: 0 rss: 73Mb L: 17/35 MS: 1 CrossOver- 00:25:39.002 [2024-06-10 11:36:22.841584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.841609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.002 [2024-06-10 11:36:22.841667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.841680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.002 [2024-06-10 11:36:22.841734] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.841748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.002 #32 NEW cov: 12068 ft: 14161 corp: 16/425b lim: 35 exec/s: 0 rss: 73Mb L: 26/35 MS: 1 ChangeBinInt- 00:25:39.002 [2024-06-10 11:36:22.891736] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.891761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.002 [2024-06-10 11:36:22.891821] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.891834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.002 [2024-06-10 11:36:22.891905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.891919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.002 #33 NEW cov: 12068 ft: 14185 corp: 17/451b lim: 35 exec/s: 33 rss: 73Mb L: 26/35 MS: 1 ShuffleBytes- 00:25:39.002 [2024-06-10 11:36:22.931712] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.931740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.002 [2024-06-10 11:36:22.931796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.002 [2024-06-10 11:36:22.931810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.261 #34 NEW cov: 12068 ft: 14241 corp: 18/471b lim: 35 exec/s: 34 rss: 73Mb L: 20/35 MS: 1 EraseBytes- 00:25:39.261 [2024-06-10 11:36:22.972234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:22.972263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:22.972320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:22.972334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:22.972391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:22.972404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:22.972461] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:22.972474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:22.972529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:22.972542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:25:39.261 #35 NEW cov: 12068 ft: 14314 corp: 19/506b lim: 35 exec/s: 35 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:25:39.261 [2024-06-10 11:36:23.021983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.022007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:23.022063] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.022076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.261 #36 NEW cov: 12068 ft: 14358 corp: 20/526b lim: 35 exec/s: 36 rss: 73Mb L: 20/35 MS: 1 ChangeBinInt- 00:25:39.261 [2024-06-10 11:36:23.072251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.072291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:23.072348] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.072362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:23.072417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.072431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.261 #37 NEW cov: 12068 ft: 14435 corp: 21/553b lim: 35 exec/s: 37 rss: 73Mb L: 27/35 MS: 1 CopyPart- 00:25:39.261 [2024-06-10 11:36:23.122287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.122312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:23.122370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.122384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.261 #38 NEW cov: 12068 ft: 14456 corp: 22/570b lim: 35 exec/s: 38 rss: 73Mb L: 17/35 MS: 1 ShuffleBytes- 00:25:39.261 [2024-06-10 11:36:23.172525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.172550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:23.172610] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.172624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:25:39.261 [2024-06-10 11:36:23.172694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.261 [2024-06-10 11:36:23.172708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.261 #39 NEW cov: 12068 ft: 14490 corp: 23/596b lim: 35 exec/s: 39 rss: 73Mb L: 26/35 MS: 1 ShuffleBytes- 00:25:39.519 [2024-06-10 11:36:23.212929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.212954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.213014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.213028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.213084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.213097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.213153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.213167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.213224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.213238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:39.519 #40 NEW cov: 12068 ft: 14500 corp: 24/631b lim: 35 exec/s: 40 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:25:39.519 [2024-06-10 11:36:23.252664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.252689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.519 NEW_FUNC[1/1]: 0x4b8410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:25:39.519 #41 NEW cov: 12082 ft: 14678 corp: 25/649b lim: 35 exec/s: 41 rss: 73Mb L: 18/35 MS: 1 CrossOver- 00:25:39.519 [2024-06-10 11:36:23.292869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.292894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.292968] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.292982] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.293040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.293054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.519 #42 NEW cov: 12082 ft: 14714 corp: 26/671b lim: 35 exec/s: 42 rss: 73Mb L: 22/35 MS: 1 EraseBytes- 00:25:39.519 [2024-06-10 11:36:23.333123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.333150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.333224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.333238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.333301] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.333315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.333372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.333386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:39.519 #43 NEW cov: 12082 ft: 14778 corp: 27/703b lim: 35 exec/s: 43 rss: 74Mb L: 32/35 MS: 1 InsertByte- 00:25:39.519 [2024-06-10 11:36:23.383132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.383158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.383231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.383251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.383310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.383324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.519 #44 NEW cov: 12082 ft: 14811 corp: 28/729b lim: 35 exec/s: 44 rss: 74Mb L: 26/35 MS: 1 ChangeBinInt- 00:25:39.519 [2024-06-10 11:36:23.423225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.423254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.423313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.519 [2024-06-10 11:36:23.423327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.519 [2024-06-10 11:36:23.423401] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.520 [2024-06-10 11:36:23.423415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.520 #45 NEW cov: 12082 ft: 14819 corp: 29/756b lim: 35 exec/s: 45 rss: 74Mb L: 27/35 MS: 1 ChangeBit- 00:25:39.520 [2024-06-10 11:36:23.463215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.520 [2024-06-10 11:36:23.463241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.520 [2024-06-10 11:36:23.463305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.520 [2024-06-10 11:36:23.463319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.778 #46 NEW cov: 12082 ft: 14879 corp: 30/773b lim: 35 exec/s: 46 rss: 74Mb L: 17/35 MS: 1 ChangeBinInt- 00:25:39.778 [2024-06-10 11:36:23.503492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.503520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.503580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.503595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.503648] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.503661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.778 #47 NEW cov: 12082 ft: 14881 corp: 31/796b lim: 35 exec/s: 47 rss: 74Mb L: 23/35 MS: 1 EraseBytes- 00:25:39.778 [2024-06-10 11:36:23.553609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.553635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.553695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.553709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.553764] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.553777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.778 #48 NEW cov: 12082 ft: 14899 corp: 32/823b lim: 35 exec/s: 48 rss: 74Mb L: 27/35 MS: 1 InsertByte- 00:25:39.778 [2024-06-10 11:36:23.603827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.603852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.603910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.603924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.603983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.604000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.604056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.604070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:39.778 #49 NEW cov: 12082 ft: 14945 corp: 33/855b lim: 35 exec/s: 49 rss: 74Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:25:39.778 [2024-06-10 11:36:23.643971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.643997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.644055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.644070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.644128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.644143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.644199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.644212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:39.778 #50 NEW cov: 12082 ft: 14955 corp: 34/883b lim: 35 exec/s: 50 rss: 74Mb L: 28/35 MS: 1 CMP- DE: "\377\036"- 00:25:39.778 [2024-06-10 11:36:23.693866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 
cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.693892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.778 [2024-06-10 11:36:23.693952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.778 [2024-06-10 11:36:23.693966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:40.037 #51 NEW cov: 12082 ft: 14995 corp: 35/903b lim: 35 exec/s: 51 rss: 74Mb L: 20/35 MS: 1 ChangeBit- 00:25:40.037 [2024-06-10 11:36:23.744266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.744292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.744352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.744366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.744425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.744438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.744493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.744507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:40.037 #52 NEW cov: 12082 ft: 15001 corp: 36/935b lim: 35 exec/s: 52 rss: 74Mb L: 32/35 MS: 1 ChangeBinInt- 00:25:40.037 [2024-06-10 11:36:23.794290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.794315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.794371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.794385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.794442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.794456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.037 #53 NEW cov: 12082 ft: 15013 corp: 37/961b lim: 35 exec/s: 53 rss: 74Mb L: 26/35 MS: 1 ChangeBit- 00:25:40.037 [2024-06-10 11:36:23.834375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.834401] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.834458] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.834472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.834528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.834542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.037 #54 NEW cov: 12082 ft: 15025 corp: 38/988b lim: 35 exec/s: 54 rss: 74Mb L: 27/35 MS: 1 CMP- DE: "\377\377\377\377"- 00:25:40.037 [2024-06-10 11:36:23.874635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.874662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.874723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.874737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.874796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.874810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.037 [2024-06-10 11:36:23.874867] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.037 [2024-06-10 11:36:23.874881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:40.037 #55 NEW cov: 12082 ft: 15062 corp: 39/1022b lim: 35 exec/s: 27 rss: 74Mb L: 34/35 MS: 1 CrossOver- 00:25:40.037 #55 DONE cov: 12082 ft: 15062 corp: 39/1022b lim: 35 exec/s: 27 rss: 74Mb 00:25:40.037 ###### Recommended dictionary. ###### 00:25:40.037 "\377\036" # Uses: 0 00:25:40.037 "\377\377\377\377" # Uses: 0 00:25:40.037 ###### End of recommended dictionary. 
###### 00:25:40.037 Done 55 runs in 2 second(s) 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:40.297 11:36:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:25:40.297 [2024-06-10 11:36:24.099994] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:40.297 [2024-06-10 11:36:24.100076] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334669 ] 00:25:40.297 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.555 [2024-06-10 11:36:24.430200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.814 [2024-06-10 11:36:24.522085] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.814 [2024-06-10 11:36:24.582404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.814 [2024-06-10 11:36:24.598658] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:25:40.814 INFO: Running with entropic power schedule (0xFF, 100). 
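[editorial note] The run.sh trace above shows the per-fuzzer launch pattern used for every run in this job: the two-digit fuzzer index (here 16) appears to be appended to "44" to form the NVMe/TCP listen port (4416), the stock fuzz_json.conf is rewritten to that port, known-benign teardown leaks are suppressed via LSAN, and the harness is run for one second against the freshly created corpus directory. The following is an illustrative standalone reconstruction of that sequence, not part of the captured log; paths are the workspace paths visible in the trace, and the port formula, the redirection of the leak entries into the suppression file, and the inline LSAN_OPTIONS export are inferred from the traced values rather than quoted from run.sh.

  # Sketch of the per-run launch traced above (fuzzer index 16); adjust SPDK for a local tree.
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  idx=16
  port="44$(printf %02d "$idx")"        # assumed construction; matches the traced value 4416
  conf=/tmp/fuzz_json_${idx}.conf
  corpus=$SPDK/../corpus/llvm_nvmf_${idx}
  mkdir -p "$corpus"
  # Rewrite the default listen port (4420) in the target JSON config to the per-run port.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$conf"
  # Leaks expected when the app is torn down mid-I/O; redirection into the file is assumed.
  echo leak:spdk_nvmf_qpair_disconnect  > /var/tmp/suppress_nvmf_fuzz
  echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_nvmf_fuzz
  # One-second fuzz run against the NVMe/TCP target, flags as shown in the trace.
  LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
    "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$SPDK/../output/llvm/" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
    -c "$conf" -t 1 -D "$corpus" -Z 16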
00:25:40.814 INFO: Seed: 3220140865 00:25:40.814 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:40.814 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:40.814 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:25:40.814 INFO: A corpus is not provided, starting from an empty corpus 00:25:40.814 #2 INITED exec/s: 0 rss: 65Mb 00:25:40.814 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:40.814 This may also happen if the target rejected all inputs we tried so far 00:25:40.814 [2024-06-10 11:36:24.654114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.814 [2024-06-10 11:36:24.654146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:40.814 [2024-06-10 11:36:24.654183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.814 [2024-06-10 11:36:24.654198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:40.814 [2024-06-10 11:36:24.654256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.814 [2024-06-10 11:36:24.654275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:40.814 [2024-06-10 11:36:24.654325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.814 [2024-06-10 11:36:24.654340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.073 NEW_FUNC[1/686]: 0x499940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:25:41.073 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:41.073 #6 NEW cov: 11927 ft: 11922 corp: 2/93b lim: 105 exec/s: 0 rss: 71Mb L: 92/92 MS: 4 ChangeBit-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:25:41.073 [2024-06-10 11:36:24.985035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.073 [2024-06-10 11:36:24.985078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.073 [2024-06-10 11:36:24.985136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.073 [2024-06-10 11:36:24.985153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.073 [2024-06-10 11:36:24.985206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.073 [2024-06-10 11:36:24.985220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:25:41.073 [2024-06-10 11:36:24.985276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.073 [2024-06-10 11:36:24.985292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.332 NEW_FUNC[1/1]: 0x1880c80 in nvme_tcp_read_data /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h:412 00:25:41.332 #12 NEW cov: 12058 ft: 12452 corp: 3/185b lim: 105 exec/s: 0 rss: 72Mb L: 92/92 MS: 1 ChangeByte- 00:25:41.332 [2024-06-10 11:36:25.045085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.045117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.045157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.045173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.045224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.045240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.045300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.045316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.332 #13 NEW cov: 12064 ft: 12875 corp: 4/278b lim: 105 exec/s: 0 rss: 72Mb L: 93/93 MS: 1 InsertByte- 00:25:41.332 [2024-06-10 11:36:25.085174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.085207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.085250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.085265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.085320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.085336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.085387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.085402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:25:41.332 #19 NEW cov: 12149 ft: 13078 corp: 5/377b lim: 105 exec/s: 0 rss: 72Mb L: 99/99 MS: 1 CopyPart- 00:25:41.332 [2024-06-10 11:36:25.135363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.135390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.135440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.135456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.135511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.135527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.332 [2024-06-10 11:36:25.135582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-06-10 11:36:25.135598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.333 #20 NEW cov: 12149 ft: 13167 corp: 6/470b lim: 105 exec/s: 0 rss: 72Mb L: 93/99 MS: 1 CrossOver- 00:25:41.333 [2024-06-10 11:36:25.175478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.175507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.175556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.175573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.175630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.175646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.175701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.175717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.333 #21 NEW cov: 12149 ft: 13271 corp: 7/567b lim: 105 exec/s: 0 rss: 72Mb L: 97/99 MS: 1 InsertRepeatedBytes- 00:25:41.333 [2024-06-10 11:36:25.215569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.215596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.215646] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.215662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.215720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.215735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.215791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.215806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.333 #22 NEW cov: 12149 ft: 13353 corp: 8/660b lim: 105 exec/s: 0 rss: 72Mb L: 93/99 MS: 1 CopyPart- 00:25:41.333 [2024-06-10 11:36:25.265700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.265727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.265781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.265797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.265868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.265885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.333 [2024-06-10 11:36:25.265941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-06-10 11:36:25.265955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.592 #23 NEW cov: 12149 ft: 13395 corp: 9/753b lim: 105 exec/s: 0 rss: 72Mb L: 93/99 MS: 1 ShuffleBytes- 00:25:41.592 [2024-06-10 11:36:25.315837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.315864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.315914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.315929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.315981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 
11:36:25.315997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.316052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.316067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.592 #24 NEW cov: 12149 ft: 13435 corp: 10/845b lim: 105 exec/s: 0 rss: 72Mb L: 92/99 MS: 1 ChangeBinInt- 00:25:41.592 [2024-06-10 11:36:25.355966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.355993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.356044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.356060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.356110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.356126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.356179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.356195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.592 #25 NEW cov: 12149 ft: 13483 corp: 11/938b lim: 105 exec/s: 0 rss: 72Mb L: 93/99 MS: 1 ChangeBit- 00:25:41.592 [2024-06-10 11:36:25.396077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.396105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.396158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.396175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.396227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.396241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.396301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.396316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:25:41.592 #26 NEW cov: 12149 ft: 13517 corp: 12/1038b lim: 105 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:25:41.592 [2024-06-10 11:36:25.446222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.446254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.446306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.446323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.446377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.446393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.446449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.446468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.592 #27 NEW cov: 12149 ft: 13530 corp: 13/1136b lim: 105 exec/s: 0 rss: 72Mb L: 98/100 MS: 1 InsertRepeatedBytes- 00:25:41.592 [2024-06-10 11:36:25.496404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.496433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.496481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.496498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.496551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.496568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.592 [2024-06-10 11:36:25.496622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.592 [2024-06-10 11:36:25.496639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.592 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:41.592 #28 NEW cov: 12172 ft: 13561 corp: 14/1229b lim: 105 exec/s: 0 rss: 73Mb L: 93/100 MS: 1 CMP- DE: "\377~"- 00:25:41.851 [2024-06-10 11:36:25.546592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.546620] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.546672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.546687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.546755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:23808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.546771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.546824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.546838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.851 #29 NEW cov: 12172 ft: 13577 corp: 15/1322b lim: 105 exec/s: 0 rss: 73Mb L: 93/100 MS: 1 ChangeBinInt- 00:25:41.851 [2024-06-10 11:36:25.586629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.586656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.586708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.586724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.586776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:93 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.586794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.586849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.586865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.851 #32 NEW cov: 12172 ft: 13604 corp: 16/1414b lim: 105 exec/s: 0 rss: 73Mb L: 92/100 MS: 3 ChangeBit-ChangeBit-CrossOver- 00:25:41.851 [2024-06-10 11:36:25.626717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.626745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.626798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.626815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:25:41.851 [2024-06-10 11:36:25.626867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.626882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.626935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.626950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.851 #33 NEW cov: 12172 ft: 13614 corp: 17/1511b lim: 105 exec/s: 33 rss: 73Mb L: 97/100 MS: 1 ShuffleBytes- 00:25:41.851 [2024-06-10 11:36:25.666855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.666883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.666937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.666953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.667006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.667021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.667073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.667088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.851 #34 NEW cov: 12172 ft: 13628 corp: 18/1604b lim: 105 exec/s: 34 rss: 73Mb L: 93/100 MS: 1 ChangeBit- 00:25:41.851 [2024-06-10 11:36:25.717019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.717047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.851 [2024-06-10 11:36:25.717101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3242591731706757120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.851 [2024-06-10 11:36:25.717116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.852 [2024-06-10 11:36:25.717173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.717190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.852 [2024-06-10 11:36:25.717243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11520 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.717265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.852 #35 NEW cov: 12172 ft: 13660 corp: 19/1707b lim: 105 exec/s: 35 rss: 73Mb L: 103/103 MS: 1 CopyPart- 00:25:41.852 [2024-06-10 11:36:25.757132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:137438953472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.757159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.852 [2024-06-10 11:36:25.757214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.757230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.852 [2024-06-10 11:36:25.757288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.757303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.852 [2024-06-10 11:36:25.757358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.757373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:41.852 #36 NEW cov: 12172 ft: 13674 corp: 20/1799b lim: 105 exec/s: 36 rss: 73Mb L: 92/103 MS: 1 ChangeBit- 00:25:41.852 [2024-06-10 11:36:25.797331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.797359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:41.852 [2024-06-10 11:36:25.797414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.797431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:41.852 [2024-06-10 11:36:25.797487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.797504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:41.852 [2024-06-10 11:36:25.797564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.852 [2024-06-10 11:36:25.797580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.112 #37 NEW cov: 12172 ft: 13692 corp: 21/1893b lim: 105 exec/s: 37 rss: 73Mb L: 94/103 MS: 1 CopyPart- 00:25:42.112 [2024-06-10 11:36:25.837350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.837380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.837443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.837460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.837520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:93 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.837546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.837602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.837617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.112 #38 NEW cov: 12172 ft: 13707 corp: 22/1985b lim: 105 exec/s: 38 rss: 73Mb L: 92/103 MS: 1 ChangeBit- 00:25:42.112 [2024-06-10 11:36:25.887506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.887534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.887587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.887601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.887656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.887672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.887727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.887743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.112 #39 NEW cov: 12172 ft: 13722 corp: 23/2084b lim: 105 exec/s: 39 rss: 73Mb L: 99/103 MS: 1 ChangeBit- 00:25:42.112 [2024-06-10 11:36:25.937600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.937627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.937681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.937696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.937751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.937766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.937821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.937837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.112 #40 NEW cov: 12172 ft: 13754 corp: 24/2177b lim: 105 exec/s: 40 rss: 73Mb L: 93/103 MS: 1 ShuffleBytes- 00:25:42.112 [2024-06-10 11:36:25.977757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:65406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.977784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.977834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.977854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.112 [2024-06-10 11:36:25.977908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.112 [2024-06-10 11:36:25.977924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.113 [2024-06-10 11:36:25.977980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.113 [2024-06-10 11:36:25.977995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.113 #41 NEW cov: 12172 ft: 13801 corp: 25/2269b lim: 105 exec/s: 41 rss: 73Mb L: 92/103 MS: 1 PersAutoDict- DE: "\377~"- 00:25:42.113 [2024-06-10 11:36:26.017802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.113 [2024-06-10 11:36:26.017828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.113 [2024-06-10 11:36:26.017893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.113 [2024-06-10 11:36:26.017909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.113 [2024-06-10 11:36:26.017966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.113 [2024-06-10 11:36:26.017982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.113 [2024-06-10 11:36:26.018038] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.113 [2024-06-10 11:36:26.018054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.113 #42 NEW cov: 12172 ft: 13829 corp: 26/2366b lim: 105 exec/s: 42 rss: 73Mb L: 97/103 MS: 1 ChangeBit- 00:25:42.372 [2024-06-10 11:36:26.067946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.067974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.068023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.068039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.068109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.068126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.068183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.068197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.372 #43 NEW cov: 12172 ft: 13854 corp: 27/2466b lim: 105 exec/s: 43 rss: 74Mb L: 100/103 MS: 1 PersAutoDict- DE: "\377~"- 00:25:42.372 [2024-06-10 11:36:26.118135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.118162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.118206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.118223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.118284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.118299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.118355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.118369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.372 #44 NEW cov: 12172 ft: 13874 corp: 28/2563b lim: 105 exec/s: 44 rss: 74Mb L: 97/103 MS: 1 CopyPart- 00:25:42.372 [2024-06-10 11:36:26.168300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.168327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.168380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.168397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.168453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:227598906949632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.168468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.168524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.168539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.372 #45 NEW cov: 12172 ft: 13882 corp: 29/2661b lim: 105 exec/s: 45 rss: 74Mb L: 98/103 MS: 1 InsertByte- 00:25:42.372 [2024-06-10 11:36:26.208372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.208401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.372 [2024-06-10 11:36:26.208450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.372 [2024-06-10 11:36:26.208467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.373 [2024-06-10 11:36:26.208521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.208538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.373 [2024-06-10 11:36:26.208594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.208610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.373 #51 NEW cov: 12172 ft: 13900 corp: 30/2753b lim: 105 exec/s: 51 rss: 74Mb L: 92/103 MS: 1 ChangeBit- 00:25:42.373 [2024-06-10 11:36:26.248489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.248522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.373 [2024-06-10 11:36:26.248561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.248578] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.373 [2024-06-10 11:36:26.248632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.248648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.373 [2024-06-10 11:36:26.248705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.248722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.373 #52 NEW cov: 12172 ft: 13950 corp: 31/2853b lim: 105 exec/s: 52 rss: 74Mb L: 100/103 MS: 1 CrossOver- 00:25:42.373 [2024-06-10 11:36:26.288593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.288621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.373 [2024-06-10 11:36:26.288668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.288684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.373 [2024-06-10 11:36:26.288735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:93 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.288751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.373 [2024-06-10 11:36:26.288809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.373 [2024-06-10 11:36:26.288825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.632 #53 NEW cov: 12172 ft: 13956 corp: 32/2945b lim: 105 exec/s: 53 rss: 74Mb L: 92/103 MS: 1 ChangeBit- 00:25:42.632 [2024-06-10 11:36:26.338778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.338807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.338872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.338888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.338945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.338961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 
11:36:26.339018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.339034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.632 #54 NEW cov: 12172 ft: 13974 corp: 33/3038b lim: 105 exec/s: 54 rss: 74Mb L: 93/103 MS: 1 ChangeBit- 00:25:42.632 [2024-06-10 11:36:26.378896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.378925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.378970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.378987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.379041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.379058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.379111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.379129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.632 #55 NEW cov: 12172 ft: 13993 corp: 34/3131b lim: 105 exec/s: 55 rss: 74Mb L: 93/103 MS: 1 CrossOver- 00:25:42.632 [2024-06-10 11:36:26.418949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.418976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.419026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.419043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.419098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.419114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.419169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.419185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.632 #56 NEW cov: 12172 ft: 14019 corp: 35/3231b lim: 105 exec/s: 56 rss: 74Mb L: 100/103 MS: 1 ChangeBit- 00:25:42.632 [2024-06-10 11:36:26.459112] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.459141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.632 [2024-06-10 11:36:26.459190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:49 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.632 [2024-06-10 11:36:26.459207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.633 [2024-06-10 11:36:26.459263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.459279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.633 [2024-06-10 11:36:26.459334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.459349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.633 #57 NEW cov: 12172 ft: 14026 corp: 36/3331b lim: 105 exec/s: 57 rss: 74Mb L: 100/103 MS: 1 ChangeByte- 00:25:42.633 [2024-06-10 11:36:26.509282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16743936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.509310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.633 [2024-06-10 11:36:26.509347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.509364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.633 [2024-06-10 11:36:26.509417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.509431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.633 [2024-06-10 11:36:26.509488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.509502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.633 #58 NEW cov: 12172 ft: 14064 corp: 37/3431b lim: 105 exec/s: 58 rss: 74Mb L: 100/103 MS: 1 PersAutoDict- DE: "\377~"- 00:25:42.633 [2024-06-10 11:36:26.559397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.559424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.633 [2024-06-10 11:36:26.559474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:42.633 [2024-06-10 11:36:26.559490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.633 [2024-06-10 11:36:26.559547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:93 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.559565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.633 [2024-06-10 11:36:26.559620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.633 [2024-06-10 11:36:26.559634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.633 #59 NEW cov: 12172 ft: 14070 corp: 38/3523b lim: 105 exec/s: 59 rss: 74Mb L: 92/103 MS: 1 ShuffleBytes- 00:25:42.892 [2024-06-10 11:36:26.599515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.892 [2024-06-10 11:36:26.599544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:42.892 [2024-06-10 11:36:26.599592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.892 [2024-06-10 11:36:26.599608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:42.892 [2024-06-10 11:36:26.599677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.892 [2024-06-10 11:36:26.599693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:42.892 [2024-06-10 11:36:26.599748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.892 [2024-06-10 11:36:26.599768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:42.893 #60 NEW cov: 12172 ft: 14078 corp: 39/3615b lim: 105 exec/s: 30 rss: 74Mb L: 92/103 MS: 1 ChangeByte- 00:25:42.893 #60 DONE cov: 12172 ft: 14078 corp: 39/3615b lim: 105 exec/s: 30 rss: 74Mb 00:25:42.893 ###### Recommended dictionary. ###### 00:25:42.893 "\377~" # Uses: 3 00:25:42.893 ###### End of recommended dictionary. 
###### 00:25:42.893 Done 60 runs in 2 second(s) 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:42.893 11:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:25:42.893 [2024-06-10 11:36:26.823942] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:42.893 [2024-06-10 11:36:26.824026] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335031 ] 00:25:43.152 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.411 [2024-06-10 11:36:27.153171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.411 [2024-06-10 11:36:27.244910] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.411 [2024-06-10 11:36:27.305129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.411 [2024-06-10 11:36:27.321373] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:25:43.411 INFO: Running with entropic power schedule (0xFF, 100). 
00:25:43.411 INFO: Seed: 1645168785 00:25:43.411 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:43.411 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:43.411 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:25:43.411 INFO: A corpus is not provided, starting from an empty corpus 00:25:43.411 #2 INITED exec/s: 0 rss: 65Mb 00:25:43.411 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:43.411 This may also happen if the target rejected all inputs we tried so far 00:25:43.670 [2024-06-10 11:36:27.369878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.670 [2024-06-10 11:36:27.369910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:43.670 [2024-06-10 11:36:27.369963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.670 [2024-06-10 11:36:27.369979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:43.671 [2024-06-10 11:36:27.370033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.671 [2024-06-10 11:36:27.370065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:43.930 NEW_FUNC[1/688]: 0x49ccc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:25:43.930 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:43.930 #22 NEW cov: 11948 ft: 11950 corp: 2/92b lim: 120 exec/s: 0 rss: 72Mb L: 91/91 MS: 5 ChangeBit-ChangeByte-CopyPart-ChangeByte-InsertRepeatedBytes- 00:25:43.930 [2024-06-10 11:36:27.710703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097924387274890525 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.710745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:43.930 [2024-06-10 11:36:27.710799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.710816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:43.930 [2024-06-10 11:36:27.710868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.710899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:43.930 #23 NEW cov: 12079 ft: 12608 corp: 3/183b lim: 120 exec/s: 0 rss: 72Mb L: 91/91 MS: 1 ChangeByte- 00:25:43.930 [2024-06-10 11:36:27.760877] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.760907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:43.930 [2024-06-10 11:36:27.760948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:488439808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.760964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:43.930 [2024-06-10 11:36:27.761018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:32010391257088 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.761036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:43.930 [2024-06-10 11:36:27.761088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.761105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:43.930 #24 NEW cov: 12085 ft: 13153 corp: 4/302b lim: 120 exec/s: 0 rss: 72Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:25:43.930 [2024-06-10 11:36:27.800820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097924387274890525 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.800849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:43.930 [2024-06-10 11:36:27.800889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.800906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:43.930 [2024-06-10 11:36:27.800961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.930 [2024-06-10 11:36:27.800978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:43.930 #25 NEW cov: 12170 ft: 13490 corp: 5/394b lim: 120 exec/s: 0 rss: 72Mb L: 92/119 MS: 1 InsertByte- 00:25:43.931 [2024-06-10 11:36:27.851097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.931 [2024-06-10 11:36:27.851125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:43.931 [2024-06-10 11:36:27.851174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.931 [2024-06-10 11:36:27.851190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:43.931 [2024-06-10 11:36:27.851243] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.931 [2024-06-10 11:36:27.851263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:43.931 [2024-06-10 11:36:27.851316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.931 [2024-06-10 11:36:27.851331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:43.931 #26 NEW cov: 12170 ft: 13620 corp: 6/513b lim: 120 exec/s: 0 rss: 72Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:25:44.190 [2024-06-10 11:36:27.891091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097924387274890525 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.891120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.891157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.891172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.891227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.891244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.190 #27 NEW cov: 12170 ft: 13684 corp: 7/604b lim: 120 exec/s: 0 rss: 72Mb L: 91/119 MS: 1 ChangeBit- 00:25:44.190 [2024-06-10 11:36:27.931333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430890996645695 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.931363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.931402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.931418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.931471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.931486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.931539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.931553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.190 #32 NEW 
cov: 12170 ft: 13812 corp: 8/711b lim: 120 exec/s: 0 rss: 72Mb L: 107/119 MS: 5 ChangeBit-ChangeBit-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:25:44.190 [2024-06-10 11:36:27.971630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.971658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.971712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.971729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.971797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.971813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.971867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865013814172957 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.971884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.190 [2024-06-10 11:36:27.971937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.190 [2024-06-10 11:36:27.971953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:44.190 #33 NEW cov: 12170 ft: 13909 corp: 9/831b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 InsertByte- 00:25:44.190 [2024-06-10 11:36:28.021456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865011990701597 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.021484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.021529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.021545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.021599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.021613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.191 #34 NEW cov: 12170 ft: 13946 corp: 10/919b lim: 120 exec/s: 0 rss: 72Mb L: 88/120 MS: 1 CrossOver- 00:25:44.191 [2024-06-10 11:36:28.061860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 
[2024-06-10 11:36:28.061887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.061942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.061958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.062011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.062027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.062077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865013814172957 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.062093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.062146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.062162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:44.191 #35 NEW cov: 12170 ft: 13980 corp: 11/1039b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 InsertByte- 00:25:44.191 [2024-06-10 11:36:28.101988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.102015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.102071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.102088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.102141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.102157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.102209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865013814172957 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.102223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.191 [2024-06-10 11:36:28.102282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.191 [2024-06-10 11:36:28.102298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:44.191 #36 NEW cov: 12170 ft: 13989 corp: 12/1159b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 ChangeBinInt- 00:25:44.450 [2024-06-10 11:36:28.152071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.152103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.152147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.152162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.152214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.152230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.152287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865013814172957 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.152304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.152356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.152371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:44.450 #37 NEW cov: 12170 ft: 14016 corp: 13/1279b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 ShuffleBytes- 00:25:44.450 [2024-06-10 11:36:28.202280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.202308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.202363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097872708885617949 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.202376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.202429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.202444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.202495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865013814172957 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.202510] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.202565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.202581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:44.450 #38 NEW cov: 12170 ft: 14039 corp: 14/1399b lim: 120 exec/s: 0 rss: 73Mb L: 120/120 MS: 1 ChangeBinInt- 00:25:44.450 [2024-06-10 11:36:28.252419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.252448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.252517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865149743176989 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.252532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.252590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.252606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.252657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865013814172957 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.252673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.450 [2024-06-10 11:36:28.252728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.252744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:44.450 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:44.450 #44 NEW cov: 12193 ft: 14133 corp: 15/1519b lim: 120 exec/s: 0 rss: 73Mb L: 120/120 MS: 1 ChangeBit- 00:25:44.450 [2024-06-10 11:36:28.292229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865011990701597 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.450 [2024-06-10 11:36:28.292258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.451 [2024-06-10 11:36:28.292319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.451 [2024-06-10 11:36:28.292335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.451 [2024-06-10 11:36:28.292368] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.451 [2024-06-10 11:36:28.292384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.451 #45 NEW cov: 12193 ft: 14194 corp: 16/1607b lim: 120 exec/s: 0 rss: 73Mb L: 88/120 MS: 1 ShuffleBytes- 00:25:44.451 [2024-06-10 11:36:28.342258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.451 [2024-06-10 11:36:28.342286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.451 [2024-06-10 11:36:28.342342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:488439808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.451 [2024-06-10 11:36:28.342360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.451 #46 NEW cov: 12193 ft: 14546 corp: 17/1677b lim: 120 exec/s: 46 rss: 73Mb L: 70/120 MS: 1 CrossOver- 00:25:44.451 [2024-06-10 11:36:28.392481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.451 [2024-06-10 11:36:28.392509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.451 [2024-06-10 11:36:28.392573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.451 [2024-06-10 11:36:28.392590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.451 [2024-06-10 11:36:28.392644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.451 [2024-06-10 11:36:28.392664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.710 #47 NEW cov: 12193 ft: 14559 corp: 18/1769b lim: 120 exec/s: 47 rss: 73Mb L: 92/120 MS: 1 InsertByte- 00:25:44.710 [2024-06-10 11:36:28.432924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.710 [2024-06-10 11:36:28.432951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.710 [2024-06-10 11:36:28.433003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.710 [2024-06-10 11:36:28.433020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.710 [2024-06-10 11:36:28.433071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480568083445623 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.710 [2024-06-10 11:36:28.433087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.710 [2024-06-10 11:36:28.433137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865013814172957 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.710 [2024-06-10 11:36:28.433151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.710 [2024-06-10 11:36:28.433204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.710 [2024-06-10 11:36:28.433218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:44.710 #48 NEW cov: 12193 ft: 14576 corp: 19/1889b lim: 120 exec/s: 48 rss: 73Mb L: 120/120 MS: 1 ChangeBinInt- 00:25:44.711 [2024-06-10 11:36:28.472872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097924387274890525 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.472900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.472949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.472966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.473020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865428916051229 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.473037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.473095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.473111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.711 #49 NEW cov: 12193 ft: 14577 corp: 20/1988b lim: 120 exec/s: 49 rss: 73Mb L: 99/120 MS: 1 InsertRepeatedBytes- 00:25:44.711 [2024-06-10 11:36:28.512553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744071746617343 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.512581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.711 #51 NEW cov: 12193 ft: 15465 corp: 21/2021b lim: 120 exec/s: 51 rss: 73Mb L: 33/120 MS: 2 ChangeBit-InsertRepeatedBytes- 00:25:44.711 [2024-06-10 11:36:28.552973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.553001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.553058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223516 
len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.553074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.553128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.553142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.711 #57 NEW cov: 12193 ft: 15498 corp: 22/2112b lim: 120 exec/s: 57 rss: 73Mb L: 91/120 MS: 1 ChangeBit- 00:25:44.711 [2024-06-10 11:36:28.593066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.593094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.593152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097872708885617949 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.593169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.593224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480566472832887 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.593241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.711 #58 NEW cov: 12193 ft: 15534 corp: 23/2196b lim: 120 exec/s: 58 rss: 73Mb L: 84/120 MS: 1 CrossOver- 00:25:44.711 [2024-06-10 11:36:28.643405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744071746617343 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.643433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.643497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.643514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.643566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.643581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.711 [2024-06-10 11:36:28.643633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.711 [2024-06-10 11:36:28.643649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.970 #59 NEW cov: 12193 ft: 15562 corp: 24/2296b lim: 120 exec/s: 59 rss: 73Mb L: 100/120 MS: 1 InsertRepeatedBytes- 00:25:44.970 [2024-06-10 11:36:28.693524] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097924387274890525 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.693552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.693592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.693609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.693662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.693679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.693732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.693747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:44.970 #60 NEW cov: 12193 ft: 15572 corp: 25/2396b lim: 120 exec/s: 60 rss: 73Mb L: 100/120 MS: 1 InsertRepeatedBytes- 00:25:44.970 [2024-06-10 11:36:28.733483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.733512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.733548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304228381 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.733565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.733619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.733635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.970 #61 NEW cov: 12193 ft: 15577 corp: 26/2488b lim: 120 exec/s: 61 rss: 73Mb L: 92/120 MS: 1 ChangeByte- 00:25:44.970 [2024-06-10 11:36:28.783500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.783528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.783563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:45080465178624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.783580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 
11:36:28.833650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.833676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.833730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:45080465178624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.833745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.970 #63 NEW cov: 12193 ft: 15601 corp: 27/2558b lim: 120 exec/s: 63 rss: 73Mb L: 70/120 MS: 2 ChangeByte-ChangeByte- 00:25:44.970 [2024-06-10 11:36:28.873877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.873905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.873965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097872708885617949 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.873983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:44.970 [2024-06-10 11:36:28.874039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480566472832887 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.970 [2024-06-10 11:36:28.874055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:44.970 #64 NEW cov: 12193 ft: 15612 corp: 28/2642b lim: 120 exec/s: 64 rss: 74Mb L: 84/120 MS: 1 ChangeByte- 00:25:45.230 [2024-06-10 11:36:28.924103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990605 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:28.924133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:28.924174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:28.924190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:28.924253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:28.924270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.230 #65 NEW cov: 12193 ft: 15681 corp: 29/2733b lim: 120 exec/s: 65 rss: 74Mb L: 91/120 MS: 1 ChangeBit- 00:25:45.230 [2024-06-10 11:36:28.964302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:28.964338] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:28.964391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097872708885617949 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:28.964408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:28.964461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480497753356151 len:26472 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:28.964477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:28.964530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:8608480567461635959 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:28.964544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:45.230 #66 NEW cov: 12193 ft: 15690 corp: 30/2837b lim: 120 exec/s: 66 rss: 74Mb L: 104/120 MS: 1 InsertRepeatedBytes- 00:25:45.230 [2024-06-10 11:36:29.003961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.003989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.230 #67 NEW cov: 12193 ft: 15724 corp: 31/2879b lim: 120 exec/s: 67 rss: 74Mb L: 42/120 MS: 1 CrossOver- 00:25:45.230 [2024-06-10 11:36:29.044334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097924387274890525 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.044366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.044402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.044418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.044473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.044488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.230 #68 NEW cov: 12193 ft: 15751 corp: 32/2970b lim: 120 exec/s: 68 rss: 74Mb L: 91/120 MS: 1 ShuffleBytes- 00:25:45.230 [2024-06-10 11:36:29.084579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990605 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.084607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.084656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:0 lba:2676586395008836901 len:9502 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.084672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.084723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.084740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.084791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.084807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:45.230 #69 NEW cov: 12193 ft: 15769 corp: 33/3075b lim: 120 exec/s: 69 rss: 74Mb L: 105/120 MS: 1 InsertRepeatedBytes- 00:25:45.230 [2024-06-10 11:36:29.134926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097900198019079453 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.134954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.135004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097872708885617949 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.135019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.135070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.135087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.135136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865013814172957 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.135152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.135206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.135221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:45.230 #70 NEW cov: 12193 ft: 15787 corp: 34/3195b lim: 120 exec/s: 70 rss: 74Mb L: 120/120 MS: 1 ChangeBit- 00:25:45.230 [2024-06-10 11:36:29.174779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.174808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.174854] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.174871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.230 [2024-06-10 11:36:29.174924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.230 [2024-06-10 11:36:29.174940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.490 [2024-06-10 11:36:29.214883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.214911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.490 [2024-06-10 11:36:29.214947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.214964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.490 [2024-06-10 11:36:29.215014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.215030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.490 #72 NEW cov: 12193 ft: 15795 corp: 35/3277b lim: 120 exec/s: 72 rss: 74Mb L: 82/120 MS: 2 ChangeBit-EraseBytes- 00:25:45.490 [2024-06-10 11:36:29.254683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.254711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.490 #73 NEW cov: 12193 ft: 15796 corp: 36/3319b lim: 120 exec/s: 73 rss: 74Mb L: 42/120 MS: 1 ChangeByte- 00:25:45.490 [2024-06-10 11:36:29.305302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.305328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.490 [2024-06-10 11:36:29.305377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.305394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.490 [2024-06-10 11:36:29.305460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.305477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.490 
[2024-06-10 11:36:29.305532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.305551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:45.490 #74 NEW cov: 12193 ft: 15799 corp: 37/3438b lim: 120 exec/s: 74 rss: 74Mb L: 119/120 MS: 1 ShuffleBytes- 00:25:45.490 [2024-06-10 11:36:29.345344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097865013646990621 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.345371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:45.490 [2024-06-10 11:36:29.345418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.345435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:45.490 [2024-06-10 11:36:29.345485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.490 [2024-06-10 11:36:29.345503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:45.490 #75 NEW cov: 12193 ft: 15815 corp: 38/3530b lim: 120 exec/s: 37 rss: 74Mb L: 92/120 MS: 1 ShuffleBytes- 00:25:45.490 #75 DONE cov: 12193 ft: 15815 corp: 38/3530b lim: 120 exec/s: 37 rss: 74Mb 00:25:45.490 Done 75 runs in 2 second(s) 00:25:45.749 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": 
"4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:45.750 11:36:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:25:45.750 [2024-06-10 11:36:29.556600] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:45.750 [2024-06-10 11:36:29.556680] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335390 ] 00:25:45.750 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.009 [2024-06-10 11:36:29.887543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.268 [2024-06-10 11:36:29.987804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.268 [2024-06-10 11:36:30.049269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.268 [2024-06-10 11:36:30.065550] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:25:46.268 INFO: Running with entropic power schedule (0xFF, 100). 00:25:46.268 INFO: Seed: 96234012 00:25:46.268 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:46.268 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:46.268 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:25:46.268 INFO: A corpus is not provided, starting from an empty corpus 00:25:46.268 #2 INITED exec/s: 0 rss: 64Mb 00:25:46.268 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:25:46.268 This may also happen if the target rejected all inputs we tried so far 00:25:46.268 [2024-06-10 11:36:30.110278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:46.268 [2024-06-10 11:36:30.110324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:46.268 [2024-06-10 11:36:30.110376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:46.268 [2024-06-10 11:36:30.110392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:46.526 NEW_FUNC[1/686]: 0x4a05b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:25:46.526 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:46.526 #3 NEW cov: 11892 ft: 11893 corp: 2/48b lim: 100 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:25:46.785 [2024-06-10 11:36:30.491220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:46.785 [2024-06-10 11:36:30.491277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:46.785 [2024-06-10 11:36:30.491341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:46.785 [2024-06-10 11:36:30.491357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:46.785 [2024-06-10 11:36:30.491386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:46.785 [2024-06-10 11:36:30.491401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:46.785 #15 NEW cov: 12022 ft: 12690 corp: 3/109b lim: 100 exec/s: 0 rss: 72Mb L: 61/61 MS: 2 CrossOver-InsertRepeatedBytes- 00:25:46.785 [2024-06-10 11:36:30.551161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:46.785 [2024-06-10 11:36:30.551196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:46.785 [2024-06-10 11:36:30.551253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:46.785 [2024-06-10 11:36:30.551270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:46.785 #16 NEW cov: 12028 ft: 12993 corp: 4/153b lim: 100 exec/s: 0 rss: 72Mb L: 44/61 MS: 1 EraseBytes- 00:25:46.785 [2024-06-10 11:36:30.631447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:46.785 [2024-06-10 11:36:30.631476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:46.785 [2024-06-10 11:36:30.631523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:46.785 [2024-06-10 11:36:30.631543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:46.785 [2024-06-10 11:36:30.631573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:46.785 [2024-06-10 11:36:30.631588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:46.785 #17 NEW cov: 12113 ft: 13219 corp: 5/224b lim: 100 exec/s: 0 rss: 72Mb L: 71/71 MS: 1 InsertRepeatedBytes- 00:25:46.785 [2024-06-10 11:36:30.711557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:46.785 [2024-06-10 11:36:30.711585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.044 #23 NEW cov: 12113 ft: 13617 corp: 6/252b lim: 100 exec/s: 0 rss: 72Mb L: 28/71 MS: 1 EraseBytes- 00:25:47.044 [2024-06-10 11:36:30.771711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.044 [2024-06-10 11:36:30.771739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.044 #24 NEW cov: 12113 ft: 13704 corp: 7/276b lim: 100 exec/s: 0 rss: 72Mb L: 24/71 MS: 1 EraseBytes- 00:25:47.044 [2024-06-10 11:36:30.851976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.044 [2024-06-10 11:36:30.852008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.044 [2024-06-10 11:36:30.852041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.044 [2024-06-10 11:36:30.852057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.044 #25 NEW cov: 12113 ft: 13846 corp: 8/318b lim: 100 exec/s: 0 rss: 72Mb L: 42/71 MS: 1 CopyPart- 00:25:47.044 [2024-06-10 11:36:30.912182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.044 [2024-06-10 11:36:30.912211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.044 [2024-06-10 11:36:30.912266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.044 [2024-06-10 11:36:30.912283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.044 [2024-06-10 11:36:30.912313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:47.044 [2024-06-10 11:36:30.912328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:47.044 #26 NEW cov: 12113 ft: 13874 corp: 9/385b lim: 100 exec/s: 0 rss: 72Mb L: 67/71 MS: 1 InsertRepeatedBytes- 00:25:47.044 [2024-06-10 11:36:30.962308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.044 [2024-06-10 11:36:30.962336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.044 [2024-06-10 11:36:30.962383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.044 [2024-06-10 11:36:30.962401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.044 [2024-06-10 11:36:30.962430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:47.044 [2024-06-10 11:36:30.962445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:47.303 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:47.303 #30 NEW cov: 12130 ft: 13894 corp: 10/459b lim: 100 exec/s: 0 rss: 73Mb L: 74/74 MS: 4 ChangeBit-InsertByte-ChangeBit-InsertRepeatedBytes- 00:25:47.303 [2024-06-10 11:36:31.012367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.303 [2024-06-10 11:36:31.012396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.303 [2024-06-10 11:36:31.012444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.303 [2024-06-10 11:36:31.012460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.303 #31 NEW cov: 12130 ft: 13956 corp: 11/506b lim: 100 exec/s: 0 rss: 73Mb L: 47/74 MS: 1 EraseBytes- 00:25:47.304 [2024-06-10 11:36:31.092686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.304 [2024-06-10 11:36:31.092715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.304 [2024-06-10 11:36:31.092747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.304 [2024-06-10 11:36:31.092763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.304 [2024-06-10 11:36:31.092792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:47.304 [2024-06-10 11:36:31.092807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:47.304 [2024-06-10 11:36:31.092835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:25:47.304 [2024-06-10 11:36:31.092849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:47.304 #32 NEW cov: 12130 ft: 14307 corp: 12/595b lim: 100 exec/s: 32 rss: 73Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:25:47.304 [2024-06-10 11:36:31.172811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.304 [2024-06-10 11:36:31.172842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.304 [2024-06-10 11:36:31.172875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.304 [2024-06-10 11:36:31.172891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:25:47.304 #33 NEW cov: 12130 ft: 14342 corp: 13/640b lim: 100 exec/s: 33 rss: 73Mb L: 45/89 MS: 1 InsertByte- 00:25:47.304 [2024-06-10 11:36:31.232925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.304 [2024-06-10 11:36:31.232953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.563 #34 NEW cov: 12130 ft: 14438 corp: 14/678b lim: 100 exec/s: 34 rss: 73Mb L: 38/89 MS: 1 CrossOver- 00:25:47.563 [2024-06-10 11:36:31.313182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.563 [2024-06-10 11:36:31.313209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.563 [2024-06-10 11:36:31.313266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.563 [2024-06-10 11:36:31.313283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.563 #35 NEW cov: 12130 ft: 14461 corp: 15/722b lim: 100 exec/s: 35 rss: 73Mb L: 44/89 MS: 1 ShuffleBytes- 00:25:47.563 [2024-06-10 11:36:31.363370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.563 [2024-06-10 11:36:31.363397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.563 [2024-06-10 11:36:31.363444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.563 [2024-06-10 11:36:31.363464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.563 [2024-06-10 11:36:31.363494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:47.563 [2024-06-10 11:36:31.363508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:47.563 #36 NEW cov: 12130 ft: 14476 corp: 16/796b lim: 100 exec/s: 36 rss: 73Mb L: 74/89 MS: 1 ChangeByte- 00:25:47.563 [2024-06-10 11:36:31.443576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.563 [2024-06-10 11:36:31.443604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.563 [2024-06-10 11:36:31.443651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.563 [2024-06-10 11:36:31.443667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.563 [2024-06-10 11:36:31.443696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:47.563 [2024-06-10 11:36:31.443711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:47.563 #37 NEW cov: 12130 ft: 14506 corp: 17/873b lim: 100 exec/s: 37 rss: 73Mb L: 77/89 MS: 1 InsertRepeatedBytes- 00:25:47.822 [2024-06-10 11:36:31.523735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.822 
[2024-06-10 11:36:31.523762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.822 [2024-06-10 11:36:31.523810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.822 [2024-06-10 11:36:31.523825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.822 #38 NEW cov: 12130 ft: 14539 corp: 18/917b lim: 100 exec/s: 38 rss: 73Mb L: 44/89 MS: 1 ChangeByte- 00:25:47.822 [2024-06-10 11:36:31.573826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.822 [2024-06-10 11:36:31.573854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.822 [2024-06-10 11:36:31.573902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.822 [2024-06-10 11:36:31.573918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.822 #39 NEW cov: 12130 ft: 14559 corp: 19/961b lim: 100 exec/s: 39 rss: 73Mb L: 44/89 MS: 1 ChangeBit- 00:25:47.822 [2024-06-10 11:36:31.654100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.822 [2024-06-10 11:36:31.654130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.822 [2024-06-10 11:36:31.654162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.822 [2024-06-10 11:36:31.654194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.822 [2024-06-10 11:36:31.654224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:47.822 [2024-06-10 11:36:31.654239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:47.822 #40 NEW cov: 12130 ft: 14587 corp: 20/1032b lim: 100 exec/s: 40 rss: 73Mb L: 71/89 MS: 1 CopyPart- 00:25:47.822 [2024-06-10 11:36:31.714322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:47.822 [2024-06-10 11:36:31.714351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:47.822 [2024-06-10 11:36:31.714406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:47.822 [2024-06-10 11:36:31.714422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:47.822 [2024-06-10 11:36:31.714451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:47.822 [2024-06-10 11:36:31.714465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:47.822 [2024-06-10 11:36:31.714492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:25:47.822 [2024-06-10 11:36:31.714507] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:47.822 #41 NEW cov: 12130 ft: 14600 corp: 21/1115b lim: 100 exec/s: 41 rss: 73Mb L: 83/89 MS: 1 InsertRepeatedBytes- 00:25:48.081 [2024-06-10 11:36:31.774501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:48.081 [2024-06-10 11:36:31.774532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:48.081 [2024-06-10 11:36:31.774564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:48.081 [2024-06-10 11:36:31.774580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:48.081 [2024-06-10 11:36:31.774610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:48.081 [2024-06-10 11:36:31.774626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:48.081 [2024-06-10 11:36:31.774655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:25:48.081 [2024-06-10 11:36:31.774669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:48.081 #42 NEW cov: 12130 ft: 14630 corp: 22/1204b lim: 100 exec/s: 42 rss: 73Mb L: 89/89 MS: 1 ChangeBinInt- 00:25:48.081 [2024-06-10 11:36:31.854619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:48.081 [2024-06-10 11:36:31.854648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:48.081 [2024-06-10 11:36:31.854696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:48.081 [2024-06-10 11:36:31.854713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:48.081 #43 NEW cov: 12130 ft: 14656 corp: 23/1245b lim: 100 exec/s: 43 rss: 73Mb L: 41/89 MS: 1 EraseBytes- 00:25:48.081 [2024-06-10 11:36:31.934889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:48.081 [2024-06-10 11:36:31.934922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:48.081 [2024-06-10 11:36:31.934956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:48.081 [2024-06-10 11:36:31.934972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:48.081 #44 NEW cov: 12130 ft: 14709 corp: 24/1292b lim: 100 exec/s: 44 rss: 73Mb L: 47/89 MS: 1 ChangeBit- 00:25:48.081 [2024-06-10 11:36:31.995026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:48.081 [2024-06-10 11:36:31.995055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:48.081 [2024-06-10 11:36:31.995102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:48.081 [2024-06-10 
11:36:31.995122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:48.081 [2024-06-10 11:36:31.995152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:48.081 [2024-06-10 11:36:31.995167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:48.341 #45 NEW cov: 12137 ft: 14737 corp: 25/1370b lim: 100 exec/s: 45 rss: 74Mb L: 78/89 MS: 1 InsertByte- 00:25:48.341 [2024-06-10 11:36:32.075287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:25:48.341 [2024-06-10 11:36:32.075320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:48.341 [2024-06-10 11:36:32.075353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:25:48.341 [2024-06-10 11:36:32.075370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:48.341 [2024-06-10 11:36:32.075399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:25:48.341 [2024-06-10 11:36:32.075414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:48.341 #46 NEW cov: 12137 ft: 14766 corp: 26/1448b lim: 100 exec/s: 23 rss: 74Mb L: 78/89 MS: 1 InsertRepeatedBytes- 00:25:48.341 #46 DONE cov: 12137 ft: 14766 corp: 26/1448b lim: 100 exec/s: 23 rss: 74Mb 00:25:48.341 Done 46 runs in 2 second(s) 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:48.341 11:36:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:25:48.601 [2024-06-10 11:36:32.298739] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:48.601 [2024-06-10 11:36:32.298812] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335751 ] 00:25:48.601 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.860 [2024-06-10 11:36:32.629081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.860 [2024-06-10 11:36:32.726062] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.860 [2024-06-10 11:36:32.786331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.860 [2024-06-10 11:36:32.802578] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:25:49.119 INFO: Running with entropic power schedule (0xFF, 100). 00:25:49.119 INFO: Seed: 2833224785 00:25:49.119 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:49.119 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:49.119 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:25:49.119 INFO: A corpus is not provided, starting from an empty corpus 00:25:49.119 #2 INITED exec/s: 0 rss: 65Mb 00:25:49.119 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:25:49.119 This may also happen if the target rejected all inputs we tried so far 00:25:49.119 [2024-06-10 11:36:32.847806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:49.119 [2024-06-10 11:36:32.847838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.119 [2024-06-10 11:36:32.847892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.119 [2024-06-10 11:36:32.847909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.377 NEW_FUNC[1/686]: 0x4a3570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:25:49.377 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:49.377 #22 NEW cov: 11870 ft: 11870 corp: 2/28b lim: 50 exec/s: 0 rss: 71Mb L: 27/27 MS: 5 CopyPart-ChangeBit-InsertByte-CrossOver-InsertRepeatedBytes- 00:25:49.377 [2024-06-10 11:36:33.188620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:49.377 [2024-06-10 11:36:33.188661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.377 [2024-06-10 11:36:33.188729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.377 [2024-06-10 11:36:33.188746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.377 #23 NEW cov: 12000 ft: 12296 corp: 3/55b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 ChangeByte- 00:25:49.377 [2024-06-10 11:36:33.238689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1257638400 len:1 00:25:49.377 [2024-06-10 11:36:33.238722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.377 [2024-06-10 11:36:33.238791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.377 [2024-06-10 11:36:33.238808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.378 #24 NEW cov: 12006 ft: 12678 corp: 4/82b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 ChangeByte- 00:25:49.378 [2024-06-10 11:36:33.288797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480322 len:1 00:25:49.378 [2024-06-10 11:36:33.288826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.378 [2024-06-10 11:36:33.288883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.378 [2024-06-10 11:36:33.288902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.378 #25 NEW cov: 12091 ft: 12997 corp: 5/109b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 
1 ChangeBinInt- 00:25:49.637 [2024-06-10 11:36:33.328925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480322 len:247 00:25:49.637 [2024-06-10 11:36:33.328954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.637 [2024-06-10 11:36:33.328996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.637 [2024-06-10 11:36:33.329012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.637 #26 NEW cov: 12091 ft: 13033 corp: 6/136b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 ChangeBinInt- 00:25:49.637 [2024-06-10 11:36:33.379064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:49.637 [2024-06-10 11:36:33.379091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.637 [2024-06-10 11:36:33.379141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.637 [2024-06-10 11:36:33.379158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.637 #27 NEW cov: 12091 ft: 13106 corp: 7/163b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 ShuffleBytes- 00:25:49.637 [2024-06-10 11:36:33.419388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:49.637 [2024-06-10 11:36:33.419414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.637 [2024-06-10 11:36:33.419479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.637 [2024-06-10 11:36:33.419494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.637 [2024-06-10 11:36:33.419547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:498216206336 len:29813 00:25:49.637 [2024-06-10 11:36:33.419564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:49.637 [2024-06-10 11:36:33.419616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8391460049216894068 len:29813 00:25:49.637 [2024-06-10 11:36:33.419631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:49.637 #28 NEW cov: 12091 ft: 13457 corp: 8/208b lim: 50 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:25:49.637 [2024-06-10 11:36:33.469205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4140040192 len:1 00:25:49.637 [2024-06-10 11:36:33.469233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.637 #32 NEW cov: 12091 ft: 13781 corp: 9/227b lim: 50 exec/s: 0 rss: 72Mb L: 19/45 MS: 4 InsertByte-ChangeBinInt-ShuffleBytes-CrossOver- 00:25:49.637 
[2024-06-10 11:36:33.509682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:49.637 [2024-06-10 11:36:33.509709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.637 [2024-06-10 11:36:33.509765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.637 [2024-06-10 11:36:33.509781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.637 [2024-06-10 11:36:33.509835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:25:49.637 [2024-06-10 11:36:33.509851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:49.637 [2024-06-10 11:36:33.509904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:25:49.637 [2024-06-10 11:36:33.509921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:49.637 #33 NEW cov: 12091 ft: 13827 corp: 10/274b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:25:49.637 [2024-06-10 11:36:33.549417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:757171882496 len:1 00:25:49.637 [2024-06-10 11:36:33.549444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.637 #34 NEW cov: 12091 ft: 13893 corp: 11/288b lim: 50 exec/s: 0 rss: 72Mb L: 14/47 MS: 1 EraseBytes- 00:25:49.897 [2024-06-10 11:36:33.599866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:8193 00:25:49.897 [2024-06-10 11:36:33.599895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.897 [2024-06-10 11:36:33.599953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.897 [2024-06-10 11:36:33.599968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.897 [2024-06-10 11:36:33.600023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:25:49.897 [2024-06-10 11:36:33.600041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:49.897 [2024-06-10 11:36:33.600093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:25:49.897 [2024-06-10 11:36:33.600109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:49.897 #35 NEW cov: 12091 ft: 13910 corp: 12/335b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeBit- 00:25:49.897 [2024-06-10 11:36:33.649812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4140040192 len:1 00:25:49.897 [2024-06-10 11:36:33.649838] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.897 [2024-06-10 11:36:33.649887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65281 00:25:49.897 [2024-06-10 11:36:33.649903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.897 #36 NEW cov: 12091 ft: 13934 corp: 13/359b lim: 50 exec/s: 0 rss: 72Mb L: 24/47 MS: 1 InsertRepeatedBytes- 00:25:49.897 [2024-06-10 11:36:33.700160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:49.897 [2024-06-10 11:36:33.700188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.897 [2024-06-10 11:36:33.700254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:49.897 [2024-06-10 11:36:33.700266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:49.897 [2024-06-10 11:36:33.700293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:25:49.897 [2024-06-10 11:36:33.700313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:49.897 [2024-06-10 11:36:33.700365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:177 00:25:49.897 [2024-06-10 11:36:33.700381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:49.897 #37 NEW cov: 12091 ft: 13983 corp: 14/406b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ShuffleBytes- 00:25:49.897 [2024-06-10 11:36:33.739954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1241513984 len:1 00:25:49.897 [2024-06-10 11:36:33.739982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.897 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:49.897 #43 NEW cov: 12114 ft: 14021 corp: 15/422b lim: 50 exec/s: 0 rss: 73Mb L: 16/47 MS: 1 EraseBytes- 00:25:49.897 [2024-06-10 11:36:33.780061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2051693322 len:1 00:25:49.897 [2024-06-10 11:36:33.780088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.897 #45 NEW cov: 12114 ft: 14111 corp: 16/436b lim: 50 exec/s: 0 rss: 73Mb L: 14/47 MS: 2 ChangeByte-CrossOver- 00:25:49.897 [2024-06-10 11:36:33.820281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480322 len:247 00:25:49.897 [2024-06-10 11:36:33.820308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:49.897 [2024-06-10 11:36:33.820360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 
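The WRITE UNCORRECTABLE records above are produced by the fuzz_nvm_write_uncorrectable_command target named in the NEW_FUNC lines (llvm_nvme_fuzz.c:582), driven by TestOneInput (llvm_nvme_fuzz.c:780): each fuzz case is turned into one or more Write Uncorrectable commands whose lba/len fields are printed by nvme_io_qpair_print_command, and the completions report INVALID NAMESPACE OR FORMAT (00/0b) because nsid is 0. A minimal stand-alone sketch of that shape is shown below; the struct names, chunk size, and field offsets are assumptions for illustration only and do not reproduce the actual SPDK harness, which submits real commands on an SPDK I/O qpair.

```c
/*
 * Illustrative sketch only. The real harness is
 * test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c (fuzz_nvm_write_uncorrectable_command,
 * TestOneInput). This version only shows how raw fuzz bytes can be sliced into
 * Write Uncorrectable commands (opcode 0x04): SLBA in CDW10/11 and a 0-based NLB
 * in CDW12, matching the "lba:... len:..." fields printed in this log. Field
 * layout follows the NVMe spec; names and the chunk size are assumptions.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct fuzz_nvme_cmd {
	uint8_t  opc;   /* opcode: 0x04 = Write Uncorrectable */
	uint32_t nsid;  /* left at 0, so the target completes with Invalid Namespace or Format (00/0b) */
	uint64_t slba;  /* starting LBA, carried in CDW10/CDW11 */
	uint16_t nlb;   /* number of logical blocks, 0-based, carried in CDW12 */
};

/* Arbitrary per-command byte budget for this sketch: 8 bytes LBA + 2 bytes NLB. */
#define CHUNK_SZ 10

static void build_write_uncorrectable(struct fuzz_nvme_cmd *cmd, const uint8_t *buf)
{
	memset(cmd, 0, sizeof(*cmd));
	cmd->opc = 0x04;
	memcpy(&cmd->slba, buf, sizeof(cmd->slba));
	memcpy(&cmd->nlb, buf + 8, sizeof(cmd->nlb));
}

/* Standard libFuzzer entry point, analogous in role to TestOneInput in the SPDK harness. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct fuzz_nvme_cmd cmd;

	for (size_t off = 0; off + CHUNK_SZ <= size; off += CHUNK_SZ) {
		build_write_uncorrectable(&cmd, data + off);
		/* The real harness would submit cmd on an I/O qpair here and poll for
		 * the completion that spdk_nvme_print_completion() then reports. */
		(void)cmd;
	}
	return 0;
}
```

Built with clang -fsanitize=fuzzer, a target of this shape yields exactly the kind of "#NN NEW cov: ... corp: ..." progress lines seen throughout this run, with the mutation list (CopyPart, ChangeBit, CrossOver, ...) describing how libFuzzer derived each new corpus entry.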
00:25:49.897 [2024-06-10 11:36:33.820377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.157 #46 NEW cov: 12114 ft: 14138 corp: 17/459b lim: 50 exec/s: 46 rss: 73Mb L: 23/47 MS: 1 EraseBytes- 00:25:50.157 [2024-06-10 11:36:33.870444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480322 len:1 00:25:50.157 [2024-06-10 11:36:33.870472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.157 [2024-06-10 11:36:33.870537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:655360 len:1 00:25:50.157 [2024-06-10 11:36:33.870556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.157 #47 NEW cov: 12114 ft: 14150 corp: 18/486b lim: 50 exec/s: 47 rss: 73Mb L: 27/47 MS: 1 CrossOver- 00:25:50.157 [2024-06-10 11:36:33.910637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:281471929223680 len:65536 00:25:50.157 [2024-06-10 11:36:33.910664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.157 [2024-06-10 11:36:33.910712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 00:25:50.157 [2024-06-10 11:36:33.910728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.157 [2024-06-10 11:36:33.910780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:25:50.157 [2024-06-10 11:36:33.910795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:50.157 #48 NEW cov: 12114 ft: 14402 corp: 19/518b lim: 50 exec/s: 48 rss: 73Mb L: 32/47 MS: 1 InsertRepeatedBytes- 00:25:50.157 [2024-06-10 11:36:33.950526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2051693322 len:1 00:25:50.157 [2024-06-10 11:36:33.950556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.157 #49 NEW cov: 12114 ft: 14419 corp: 20/532b lim: 50 exec/s: 49 rss: 73Mb L: 14/47 MS: 1 ShuffleBytes- 00:25:50.157 [2024-06-10 11:36:34.000789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:50.157 [2024-06-10 11:36:34.000816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.157 [2024-06-10 11:36:34.000869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:50.157 [2024-06-10 11:36:34.000885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.157 #50 NEW cov: 12114 ft: 14428 corp: 21/559b lim: 50 exec/s: 50 rss: 73Mb L: 27/47 MS: 1 ShuffleBytes- 00:25:50.157 [2024-06-10 11:36:34.041164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:50.157 [2024-06-10 11:36:34.041190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.157 [2024-06-10 11:36:34.041257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:50.157 [2024-06-10 11:36:34.041273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.157 [2024-06-10 11:36:34.041333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:498216206336 len:29813 00:25:50.157 [2024-06-10 11:36:34.041348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:50.157 [2024-06-10 11:36:34.041400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8358680910353429620 len:1 00:25:50.157 [2024-06-10 11:36:34.041414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:50.157 #51 NEW cov: 12114 ft: 14458 corp: 22/604b lim: 50 exec/s: 51 rss: 73Mb L: 45/47 MS: 1 CopyPart- 00:25:50.157 [2024-06-10 11:36:34.090924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:50.157 [2024-06-10 11:36:34.090954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.416 #52 NEW cov: 12114 ft: 14465 corp: 23/619b lim: 50 exec/s: 52 rss: 73Mb L: 15/47 MS: 1 EraseBytes- 00:25:50.416 [2024-06-10 11:36:34.131203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480322 len:247 00:25:50.416 [2024-06-10 11:36:34.131230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.416 [2024-06-10 11:36:34.131287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:8193 00:25:50.416 [2024-06-10 11:36:34.131305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.416 #53 NEW cov: 12114 ft: 14512 corp: 24/642b lim: 50 exec/s: 53 rss: 73Mb L: 23/47 MS: 1 CrossOver- 00:25:50.416 [2024-06-10 11:36:34.181255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:36411431690 len:1 00:25:50.416 [2024-06-10 11:36:34.181283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.416 #54 NEW cov: 12114 ft: 14540 corp: 25/656b lim: 50 exec/s: 54 rss: 73Mb L: 14/47 MS: 1 ChangeBit- 00:25:50.416 [2024-06-10 11:36:34.231477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:50.416 [2024-06-10 11:36:34.231507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.417 [2024-06-10 11:36:34.231572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:50.417 [2024-06-10 11:36:34.231589] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.417 #55 NEW cov: 12114 ft: 14551 corp: 26/684b lim: 50 exec/s: 55 rss: 73Mb L: 28/47 MS: 1 InsertByte- 00:25:50.417 [2024-06-10 11:36:34.271769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247524352 len:1 00:25:50.417 [2024-06-10 11:36:34.271795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.417 [2024-06-10 11:36:34.271844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:50.417 [2024-06-10 11:36:34.271860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.417 [2024-06-10 11:36:34.271925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:498216206336 len:29813 00:25:50.417 [2024-06-10 11:36:34.271941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:50.417 [2024-06-10 11:36:34.271991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8358680910353429620 len:1 00:25:50.417 [2024-06-10 11:36:34.272006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:50.417 #56 NEW cov: 12114 ft: 14561 corp: 27/729b lim: 50 exec/s: 56 rss: 73Mb L: 45/47 MS: 1 ChangeByte- 00:25:50.417 [2024-06-10 11:36:34.321605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480322 len:1 00:25:50.417 [2024-06-10 11:36:34.321632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.417 #57 NEW cov: 12114 ft: 14567 corp: 28/746b lim: 50 exec/s: 57 rss: 73Mb L: 17/47 MS: 1 EraseBytes- 00:25:50.676 [2024-06-10 11:36:34.371764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:36411431690 len:1 00:25:50.676 [2024-06-10 11:36:34.371793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.676 #58 NEW cov: 12114 ft: 14622 corp: 29/758b lim: 50 exec/s: 58 rss: 73Mb L: 12/47 MS: 1 EraseBytes- 00:25:50.676 [2024-06-10 11:36:34.422003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:144115189323333642 len:1 00:25:50.676 [2024-06-10 11:36:34.422031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.676 [2024-06-10 11:36:34.422081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:50.676 [2024-06-10 11:36:34.422096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.676 #59 NEW cov: 12114 ft: 14675 corp: 30/785b lim: 50 exec/s: 59 rss: 73Mb L: 27/47 MS: 1 ShuffleBytes- 00:25:50.676 [2024-06-10 11:36:34.462108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:29953351030407168 len:27243 00:25:50.676 [2024-06-10 11:36:34.462135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.676 [2024-06-10 11:36:34.462172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7668058320836127338 len:27243 00:25:50.676 [2024-06-10 11:36:34.462188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.676 #63 NEW cov: 12114 ft: 14678 corp: 31/810b lim: 50 exec/s: 63 rss: 73Mb L: 25/47 MS: 4 ChangeByte-ShuffleBytes-InsertRepeatedBytes-InsertRepeatedBytes- 00:25:50.676 [2024-06-10 11:36:34.502090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:270481107913216 len:1 00:25:50.676 [2024-06-10 11:36:34.502118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.676 #64 NEW cov: 12114 ft: 14689 corp: 32/825b lim: 50 exec/s: 64 rss: 73Mb L: 15/47 MS: 1 CrossOver- 00:25:50.676 [2024-06-10 11:36:34.552580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:8193 00:25:50.676 [2024-06-10 11:36:34.552609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.676 [2024-06-10 11:36:34.552652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:50.676 [2024-06-10 11:36:34.552668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.676 [2024-06-10 11:36:34.552737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:25:50.676 [2024-06-10 11:36:34.552753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:50.676 [2024-06-10 11:36:34.552805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:25:50.676 [2024-06-10 11:36:34.552821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:50.676 #65 NEW cov: 12114 ft: 14692 corp: 33/872b lim: 50 exec/s: 65 rss: 74Mb L: 47/47 MS: 1 CopyPart- 00:25:50.676 [2024-06-10 11:36:34.602580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480322 len:221 00:25:50.676 [2024-06-10 11:36:34.602610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.676 [2024-06-10 11:36:34.602644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15852670692049607900 len:1 00:25:50.676 [2024-06-10 11:36:34.602658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.676 [2024-06-10 11:36:34.602708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:25:50.676 [2024-06-10 11:36:34.602723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:50.676 #66 NEW cov: 12114 ft: 14724 corp: 34/905b lim: 50 exec/s: 66 rss: 74Mb L: 33/47 MS: 1 InsertRepeatedBytes- 00:25:50.936 [2024-06-10 11:36:34.642643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247481858 len:247 00:25:50.936 [2024-06-10 11:36:34.642672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.936 [2024-06-10 11:36:34.642714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:8193 00:25:50.936 [2024-06-10 11:36:34.642732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.936 #67 NEW cov: 12114 ft: 14732 corp: 35/928b lim: 50 exec/s: 67 rss: 74Mb L: 23/47 MS: 1 ChangeBinInt- 00:25:50.936 [2024-06-10 11:36:34.692734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:50.936 [2024-06-10 11:36:34.692761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.936 [2024-06-10 11:36:34.692809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:19 00:25:50.936 [2024-06-10 11:36:34.692831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.936 #68 NEW cov: 12114 ft: 14746 corp: 36/957b lim: 50 exec/s: 68 rss: 74Mb L: 29/47 MS: 1 InsertByte- 00:25:50.936 [2024-06-10 11:36:34.743100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:8193 00:25:50.936 [2024-06-10 11:36:34.743127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.936 [2024-06-10 11:36:34.743173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:50.936 [2024-06-10 11:36:34.743190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.936 [2024-06-10 11:36:34.743240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:25:50.936 [2024-06-10 11:36:34.743261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:50.936 [2024-06-10 11:36:34.743313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:25:50.936 [2024-06-10 11:36:34.743328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:50.936 #69 NEW cov: 12114 ft: 14753 corp: 37/1006b lim: 50 exec/s: 69 rss: 74Mb L: 49/49 MS: 1 CopyPart- 00:25:50.936 [2024-06-10 11:36:34.793045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1247480320 len:1 00:25:50.936 [2024-06-10 11:36:34.793072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.936 [2024-06-10 
11:36:34.793120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4849664 len:1 00:25:50.936 [2024-06-10 11:36:34.793136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.936 #70 NEW cov: 12114 ft: 14783 corp: 38/1033b lim: 50 exec/s: 70 rss: 74Mb L: 27/49 MS: 1 CopyPart- 00:25:50.936 [2024-06-10 11:36:34.833238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:144115188084276992 len:1 00:25:50.936 [2024-06-10 11:36:34.833270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:50.936 [2024-06-10 11:36:34.833325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:25:50.936 [2024-06-10 11:36:34.833343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:50.936 [2024-06-10 11:36:34.833397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:25:50.936 [2024-06-10 11:36:34.833414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:50.936 #75 NEW cov: 12114 ft: 14790 corp: 39/1068b lim: 50 exec/s: 37 rss: 74Mb L: 35/49 MS: 5 EraseBytes-ChangeBit-ChangeBit-InsertByte-InsertRepeatedBytes- 00:25:50.936 #75 DONE cov: 12114 ft: 14790 corp: 39/1068b lim: 50 exec/s: 37 rss: 74Mb 00:25:50.936 Done 75 runs in 2 second(s) 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:25:51.196 11:36:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:25:51.196 11:36:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:25:51.196 11:36:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:51.196 11:36:35 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:51.196 11:36:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:51.196 11:36:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:25:51.196 [2024-06-10 11:36:35.040658] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:51.196 [2024-06-10 11:36:35.040739] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336111 ] 00:25:51.196 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.455 [2024-06-10 11:36:35.377766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.714 [2024-06-10 11:36:35.478656] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.714 [2024-06-10 11:36:35.539137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.714 [2024-06-10 11:36:35.555393] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:51.714 INFO: Running with entropic power schedule (0xFF, 100). 00:25:51.714 INFO: Seed: 1290246149 00:25:51.714 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:51.714 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:51.714 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:25:51.714 INFO: A corpus is not provided, starting from an empty corpus 00:25:51.714 #2 INITED exec/s: 0 rss: 64Mb 00:25:51.714 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
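The run that starts here exercises fuzzer type 20 against the TCP target that run.sh just brought up on 127.0.0.1:4420; its NEW_FUNC lines point at fuzz_nvm_reservation_acquire_command (llvm_nvme_fuzz.c:597), and the records that follow show RESERVATION ACQUIRE (opcode 11h) commands being printed and completed with INVALID NAMESPACE OR FORMAT (00/0b). A minimal stand-alone sketch of that command shape is shown below; the field packing follows the NVMe specification (RACQA, IEKEY and RTYPE in CDW10, current and preempt reservation keys in a 16-byte payload), but the names and per-case byte budget are assumptions and the SPDK harness may slice its input differently.

```c
/*
 * Illustrative sketch only. The real target is fuzz_nvm_reservation_acquire_command
 * in test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c. This version only shows the
 * shape of a Reservation Acquire command (opcode 0x11): RACQA in CDW10[2:0],
 * IEKEY in CDW10[3], RTYPE in CDW10[15:8], and a 16-byte data payload holding
 * the current and preempt reservation keys.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct resv_acquire_data {
	uint64_t crkey;  /* current reservation key */
	uint64_t prkey;  /* preempt reservation key, used by the preempt actions */
};

struct resv_acquire_cmd {
	uint8_t  opc;    /* 0x11 = Reservation Acquire, as printed in this log */
	uint32_t nsid;
	uint32_t cdw10;  /* RACQA [2:0], IEKEY [3], RTYPE [15:8] */
	struct resv_acquire_data payload;
};

static void build_resv_acquire(struct resv_acquire_cmd *cmd, const uint8_t *buf)
{
	memset(cmd, 0, sizeof(*cmd));
	cmd->opc = 0x11;
	cmd->cdw10 = (uint32_t)(buf[0] & 0x07)              /* RACQA: acquire / preempt / preempt-and-abort */
	           | ((uint32_t)((buf[0] >> 3) & 1) << 3)   /* IEKEY: ignore existing key */
	           | ((uint32_t)buf[1] << 8);               /* RTYPE: reservation type */
	memcpy(&cmd->payload.crkey, buf + 2, 8);
	memcpy(&cmd->payload.prkey, buf + 10, 8);
}

/* libFuzzer entry point analogous to TestOneInput: 18 bytes per command (an assumed budget). */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct resv_acquire_cmd cmd;

	for (size_t off = 0; off + 18 <= size; off += 18) {
		build_resv_acquire(&cmd, data + off);
		/* The real harness submits this on an I/O qpair of the target listening
		 * on 127.0.0.1:4420 and waits for the completion printed below. */
		(void)cmd;
	}
	return 0;
}
```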
00:25:51.714 This may also happen if the target rejected all inputs we tried so far 00:25:51.714 [2024-06-10 11:36:35.604136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:51.714 [2024-06-10 11:36:35.604169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:51.714 [2024-06-10 11:36:35.604204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:51.714 [2024-06-10 11:36:35.604219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:51.714 [2024-06-10 11:36:35.604279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:51.714 [2024-06-10 11:36:35.604296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:51.714 [2024-06-10 11:36:35.604352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:51.715 [2024-06-10 11:36:35.604372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.283 NEW_FUNC[1/688]: 0x4a5130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:25:52.283 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:52.283 #13 NEW cov: 11928 ft: 11923 corp: 2/87b lim: 90 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 InsertRepeatedBytes- 00:25:52.283 [2024-06-10 11:36:35.954959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.283 [2024-06-10 11:36:35.955004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:35.955062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.283 [2024-06-10 11:36:35.955078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:35.955133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.283 [2024-06-10 11:36:35.955149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:35.955202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.283 [2024-06-10 11:36:35.955218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.283 #14 NEW cov: 12058 ft: 12432 corp: 3/173b lim: 90 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 CopyPart- 00:25:52.283 [2024-06-10 11:36:36.004974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.283 [2024-06-10 11:36:36.005006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.283 
[2024-06-10 11:36:36.005042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.283 [2024-06-10 11:36:36.005058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.005112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.283 [2024-06-10 11:36:36.005127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.005180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.283 [2024-06-10 11:36:36.005194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.283 #15 NEW cov: 12064 ft: 12839 corp: 4/260b lim: 90 exec/s: 0 rss: 72Mb L: 87/87 MS: 1 CrossOver- 00:25:52.283 [2024-06-10 11:36:36.045111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.283 [2024-06-10 11:36:36.045138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.045184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.283 [2024-06-10 11:36:36.045199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.045253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.283 [2024-06-10 11:36:36.045268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.045320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.283 [2024-06-10 11:36:36.045337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.283 #16 NEW cov: 12149 ft: 13039 corp: 5/346b lim: 90 exec/s: 0 rss: 72Mb L: 86/87 MS: 1 ChangeByte- 00:25:52.283 [2024-06-10 11:36:36.095180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.283 [2024-06-10 11:36:36.095206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.095258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.283 [2024-06-10 11:36:36.095274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.095327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.283 [2024-06-10 11:36:36.095342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.095392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 
nsid:0 00:25:52.283 [2024-06-10 11:36:36.095407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.283 #17 NEW cov: 12149 ft: 13185 corp: 6/432b lim: 90 exec/s: 0 rss: 72Mb L: 86/87 MS: 1 ChangeByte- 00:25:52.283 [2024-06-10 11:36:36.135341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.283 [2024-06-10 11:36:36.135368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.135414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.283 [2024-06-10 11:36:36.135431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.135484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.283 [2024-06-10 11:36:36.135498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.283 [2024-06-10 11:36:36.135551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.283 [2024-06-10 11:36:36.135565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.283 #18 NEW cov: 12149 ft: 13316 corp: 7/518b lim: 90 exec/s: 0 rss: 72Mb L: 86/87 MS: 1 ChangeBinInt- 00:25:52.283 [2024-06-10 11:36:36.174992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.283 [2024-06-10 11:36:36.175019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.283 #20 NEW cov: 12149 ft: 14269 corp: 8/536b lim: 90 exec/s: 0 rss: 72Mb L: 18/87 MS: 2 ChangeByte-CrossOver- 00:25:52.283 [2024-06-10 11:36:36.215120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.283 [2024-06-10 11:36:36.215148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.542 #21 NEW cov: 12149 ft: 14290 corp: 9/558b lim: 90 exec/s: 0 rss: 72Mb L: 22/87 MS: 1 CMP- DE: "\001\000\000]"- 00:25:52.542 [2024-06-10 11:36:36.265279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.542 [2024-06-10 11:36:36.265306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.542 #22 NEW cov: 12149 ft: 14328 corp: 10/580b lim: 90 exec/s: 0 rss: 73Mb L: 22/87 MS: 1 ChangeByte- 00:25:52.542 [2024-06-10 11:36:36.315401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.542 [2024-06-10 11:36:36.315429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.542 #24 NEW cov: 12149 ft: 14379 corp: 11/615b lim: 90 exec/s: 0 rss: 73Mb L: 35/87 MS: 2 ChangeBit-InsertRepeatedBytes- 00:25:52.542 [2024-06-10 11:36:36.355932] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.542 [2024-06-10 11:36:36.355959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.542 [2024-06-10 11:36:36.356007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.542 [2024-06-10 11:36:36.356024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.542 [2024-06-10 11:36:36.356077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.542 [2024-06-10 11:36:36.356092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.542 [2024-06-10 11:36:36.356144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.543 [2024-06-10 11:36:36.356159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.543 #25 NEW cov: 12149 ft: 14415 corp: 12/701b lim: 90 exec/s: 0 rss: 73Mb L: 86/87 MS: 1 ChangeBinInt- 00:25:52.543 [2024-06-10 11:36:36.395621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.543 [2024-06-10 11:36:36.395647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.543 #29 NEW cov: 12149 ft: 14433 corp: 13/722b lim: 90 exec/s: 0 rss: 73Mb L: 21/87 MS: 4 EraseBytes-PersAutoDict-ChangeByte-PersAutoDict- DE: "\001\000\000]"-"\001\000\000]"- 00:25:52.543 [2024-06-10 11:36:36.436212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.543 [2024-06-10 11:36:36.436240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.543 [2024-06-10 11:36:36.436309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.543 [2024-06-10 11:36:36.436326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.543 [2024-06-10 11:36:36.436382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.543 [2024-06-10 11:36:36.436396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.543 [2024-06-10 11:36:36.436451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.543 [2024-06-10 11:36:36.436467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.543 #30 NEW cov: 12149 ft: 14467 corp: 14/808b lim: 90 exec/s: 0 rss: 73Mb L: 86/87 MS: 1 ChangeByte- 00:25:52.543 [2024-06-10 11:36:36.486319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.543 [2024-06-10 11:36:36.486345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.543 [2024-06-10 
11:36:36.486394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.543 [2024-06-10 11:36:36.486410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.543 [2024-06-10 11:36:36.486482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.543 [2024-06-10 11:36:36.486498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.543 [2024-06-10 11:36:36.486552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.543 [2024-06-10 11:36:36.486568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.802 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:52.802 #31 NEW cov: 12172 ft: 14510 corp: 15/894b lim: 90 exec/s: 0 rss: 73Mb L: 86/87 MS: 1 PersAutoDict- DE: "\001\000\000]"- 00:25:52.802 [2024-06-10 11:36:36.536470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.802 [2024-06-10 11:36:36.536497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.536543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.802 [2024-06-10 11:36:36.536559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.536613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.802 [2024-06-10 11:36:36.536629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.536681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.802 [2024-06-10 11:36:36.536696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.802 #32 NEW cov: 12172 ft: 14532 corp: 16/980b lim: 90 exec/s: 0 rss: 73Mb L: 86/87 MS: 1 CrossOver- 00:25:52.802 [2024-06-10 11:36:36.586464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.802 [2024-06-10 11:36:36.586491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.586542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.802 [2024-06-10 11:36:36.586557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.586612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.802 [2024-06-10 11:36:36.586627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:25:52.802 #33 NEW cov: 12172 ft: 14814 corp: 17/1039b lim: 90 exec/s: 33 rss: 73Mb L: 59/87 MS: 1 CrossOver- 00:25:52.802 [2024-06-10 11:36:36.636760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.802 [2024-06-10 11:36:36.636786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.636852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.802 [2024-06-10 11:36:36.636869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.636922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.802 [2024-06-10 11:36:36.636937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.636992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.802 [2024-06-10 11:36:36.637012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.802 #34 NEW cov: 12172 ft: 14833 corp: 18/1125b lim: 90 exec/s: 34 rss: 73Mb L: 86/87 MS: 1 CopyPart- 00:25:52.802 [2024-06-10 11:36:36.676886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.802 [2024-06-10 11:36:36.676912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.676961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.802 [2024-06-10 11:36:36.676977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.677029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.802 [2024-06-10 11:36:36.677046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.802 [2024-06-10 11:36:36.677099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.802 [2024-06-10 11:36:36.677115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.802 #35 NEW cov: 12172 ft: 14883 corp: 19/1211b lim: 90 exec/s: 35 rss: 73Mb L: 86/87 MS: 1 ChangeBit- 00:25:52.803 [2024-06-10 11:36:36.717135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:52.803 [2024-06-10 11:36:36.717161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:52.803 [2024-06-10 11:36:36.717217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:52.803 [2024-06-10 11:36:36.717234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:25:52.803 [2024-06-10 11:36:36.717293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:52.803 [2024-06-10 11:36:36.717308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:52.803 [2024-06-10 11:36:36.717362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:52.803 [2024-06-10 11:36:36.717378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:52.803 [2024-06-10 11:36:36.717432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:25:52.803 [2024-06-10 11:36:36.717448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:53.062 #36 NEW cov: 12172 ft: 14949 corp: 20/1301b lim: 90 exec/s: 36 rss: 73Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:25:53.062 [2024-06-10 11:36:36.766714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.062 [2024-06-10 11:36:36.766741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.062 #37 NEW cov: 12172 ft: 15037 corp: 21/1322b lim: 90 exec/s: 37 rss: 73Mb L: 21/90 MS: 1 EraseBytes- 00:25:53.062 [2024-06-10 11:36:36.817252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.062 [2024-06-10 11:36:36.817278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.817335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.062 [2024-06-10 11:36:36.817348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.817406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.062 [2024-06-10 11:36:36.817422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.817477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.062 [2024-06-10 11:36:36.817492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.062 #38 NEW cov: 12172 ft: 15055 corp: 22/1409b lim: 90 exec/s: 38 rss: 73Mb L: 87/90 MS: 1 InsertByte- 00:25:53.062 [2024-06-10 11:36:36.867593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.062 [2024-06-10 11:36:36.867619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.867674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.062 [2024-06-10 11:36:36.867690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:25:53.062 [2024-06-10 11:36:36.867744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.062 [2024-06-10 11:36:36.867758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.867813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.062 [2024-06-10 11:36:36.867829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.867883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:25:53.062 [2024-06-10 11:36:36.867900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:53.062 #39 NEW cov: 12172 ft: 15066 corp: 23/1499b lim: 90 exec/s: 39 rss: 73Mb L: 90/90 MS: 1 ChangeBit- 00:25:53.062 [2024-06-10 11:36:36.917266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.062 [2024-06-10 11:36:36.917294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.917331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.062 [2024-06-10 11:36:36.917347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.062 #42 NEW cov: 12172 ft: 15377 corp: 24/1540b lim: 90 exec/s: 42 rss: 73Mb L: 41/90 MS: 3 CopyPart-InsertByte-CrossOver- 00:25:53.062 [2024-06-10 11:36:36.957698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.062 [2024-06-10 11:36:36.957725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.957790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.062 [2024-06-10 11:36:36.957807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.957859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.062 [2024-06-10 11:36:36.957874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:36.957929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.062 [2024-06-10 11:36:36.957947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.062 #43 NEW cov: 12172 ft: 15408 corp: 25/1629b lim: 90 exec/s: 43 rss: 74Mb L: 89/90 MS: 1 InsertRepeatedBytes- 00:25:53.062 [2024-06-10 11:36:37.007840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.062 [2024-06-10 11:36:37.007866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:37.007920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.062 [2024-06-10 11:36:37.007937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:37.007989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.062 [2024-06-10 11:36:37.008005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.062 [2024-06-10 11:36:37.008061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.062 [2024-06-10 11:36:37.008079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.322 #44 NEW cov: 12172 ft: 15416 corp: 26/1715b lim: 90 exec/s: 44 rss: 74Mb L: 86/90 MS: 1 ChangeByte- 00:25:53.322 [2024-06-10 11:36:37.047955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.322 [2024-06-10 11:36:37.047983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.322 [2024-06-10 11:36:37.048029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.322 [2024-06-10 11:36:37.048044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.322 [2024-06-10 11:36:37.048114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.322 [2024-06-10 11:36:37.048131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.322 [2024-06-10 11:36:37.048184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.322 [2024-06-10 11:36:37.048201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.322 #45 NEW cov: 12172 ft: 15470 corp: 27/1795b lim: 90 exec/s: 45 rss: 74Mb L: 80/90 MS: 1 EraseBytes- 00:25:53.322 [2024-06-10 11:36:37.088097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.322 [2024-06-10 11:36:37.088125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.322 [2024-06-10 11:36:37.088166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.322 [2024-06-10 11:36:37.088182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.322 [2024-06-10 11:36:37.088235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.322 [2024-06-10 11:36:37.088257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.322 [2024-06-10 11:36:37.088311] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.322 [2024-06-10 11:36:37.088326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.323 #46 NEW cov: 12172 ft: 15479 corp: 28/1881b lim: 90 exec/s: 46 rss: 74Mb L: 86/90 MS: 1 ShuffleBytes- 00:25:53.323 [2024-06-10 11:36:37.127759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.323 [2024-06-10 11:36:37.127790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.323 #47 NEW cov: 12172 ft: 15578 corp: 29/1916b lim: 90 exec/s: 47 rss: 74Mb L: 35/90 MS: 1 CopyPart- 00:25:53.323 [2024-06-10 11:36:37.188644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.323 [2024-06-10 11:36:37.188675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.323 [2024-06-10 11:36:37.188733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.323 [2024-06-10 11:36:37.188752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.323 [2024-06-10 11:36:37.188807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.323 [2024-06-10 11:36:37.188825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.323 [2024-06-10 11:36:37.188883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.323 [2024-06-10 11:36:37.188901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.323 [2024-06-10 11:36:37.188955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:25:53.323 [2024-06-10 11:36:37.188973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:53.323 #48 NEW cov: 12172 ft: 15599 corp: 30/2006b lim: 90 exec/s: 48 rss: 74Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:25:53.323 [2024-06-10 11:36:37.248575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.323 [2024-06-10 11:36:37.248603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.323 [2024-06-10 11:36:37.248650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.323 [2024-06-10 11:36:37.248667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.323 [2024-06-10 11:36:37.248719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.323 [2024-06-10 11:36:37.248734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.323 [2024-06-10 11:36:37.248788] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.323 [2024-06-10 11:36:37.248801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.582 #49 NEW cov: 12172 ft: 15608 corp: 31/2092b lim: 90 exec/s: 49 rss: 74Mb L: 86/90 MS: 1 CopyPart- 00:25:53.582 [2024-06-10 11:36:37.298206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.582 [2024-06-10 11:36:37.298234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.582 #50 NEW cov: 12172 ft: 15649 corp: 32/2114b lim: 90 exec/s: 50 rss: 74Mb L: 22/90 MS: 1 ChangeBinInt- 00:25:53.582 [2024-06-10 11:36:37.338628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.582 [2024-06-10 11:36:37.338655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.582 [2024-06-10 11:36:37.338708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.582 [2024-06-10 11:36:37.338724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.582 [2024-06-10 11:36:37.338781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.582 [2024-06-10 11:36:37.338797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.582 #51 NEW cov: 12172 ft: 15662 corp: 33/2172b lim: 90 exec/s: 51 rss: 74Mb L: 58/90 MS: 1 CrossOver- 00:25:53.582 [2024-06-10 11:36:37.388918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.582 [2024-06-10 11:36:37.388945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.388994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.583 [2024-06-10 11:36:37.389010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.389063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.583 [2024-06-10 11:36:37.389081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.389136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.583 [2024-06-10 11:36:37.389152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.583 #52 NEW cov: 12172 ft: 15697 corp: 34/2258b lim: 90 exec/s: 52 rss: 74Mb L: 86/90 MS: 1 ChangeBinInt- 00:25:53.583 [2024-06-10 11:36:37.429005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.583 [2024-06-10 11:36:37.429031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.429095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.583 [2024-06-10 11:36:37.429112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.429167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.583 [2024-06-10 11:36:37.429183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.429237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.583 [2024-06-10 11:36:37.429257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.583 #53 NEW cov: 12172 ft: 15700 corp: 35/2345b lim: 90 exec/s: 53 rss: 74Mb L: 87/90 MS: 1 CrossOver- 00:25:53.583 [2024-06-10 11:36:37.479161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.583 [2024-06-10 11:36:37.479186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.479257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.583 [2024-06-10 11:36:37.479274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.479339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.583 [2024-06-10 11:36:37.479356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.479410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.583 [2024-06-10 11:36:37.479429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.519090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.583 [2024-06-10 11:36:37.519116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.519174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.583 [2024-06-10 11:36:37.519189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.583 [2024-06-10 11:36:37.519250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.583 [2024-06-10 11:36:37.519267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.843 #60 NEW cov: 12172 ft: 15738 corp: 36/2402b lim: 90 exec/s: 60 rss: 74Mb L: 57/90 MS: 2 ChangeByte-EraseBytes- 00:25:53.843 [2024-06-10 
11:36:37.559414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.843 [2024-06-10 11:36:37.559441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.843 [2024-06-10 11:36:37.559487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.843 [2024-06-10 11:36:37.559503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.843 [2024-06-10 11:36:37.559574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.843 [2024-06-10 11:36:37.559591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.843 [2024-06-10 11:36:37.559647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.843 [2024-06-10 11:36:37.559663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.843 #61 NEW cov: 12172 ft: 15739 corp: 37/2487b lim: 90 exec/s: 61 rss: 74Mb L: 85/90 MS: 1 CrossOver- 00:25:53.843 [2024-06-10 11:36:37.599505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:25:53.843 [2024-06-10 11:36:37.599532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:53.843 [2024-06-10 11:36:37.599595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:25:53.843 [2024-06-10 11:36:37.599612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:53.843 [2024-06-10 11:36:37.599667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:25:53.843 [2024-06-10 11:36:37.599682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:53.843 [2024-06-10 11:36:37.599736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:25:53.843 [2024-06-10 11:36:37.599753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:53.843 #62 NEW cov: 12172 ft: 15750 corp: 38/2574b lim: 90 exec/s: 31 rss: 74Mb L: 87/90 MS: 1 ChangeByte- 00:25:53.843 #62 DONE cov: 12172 ft: 15750 corp: 38/2574b lim: 90 exec/s: 31 rss: 74Mb 00:25:53.843 ###### Recommended dictionary. ###### 00:25:53.843 "\001\000\000]" # Uses: 3 00:25:53.843 ###### End of recommended dictionary. 
###### 00:25:53.843 Done 62 runs in 2 second(s) 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:53.843 11:36:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:25:54.103 [2024-06-10 11:36:37.813670] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:54.103 [2024-06-10 11:36:37.813751] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336473 ] 00:25:54.103 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.362 [2024-06-10 11:36:38.142333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.362 [2024-06-10 11:36:38.242709] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.362 [2024-06-10 11:36:38.302961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.621 [2024-06-10 11:36:38.319223] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:25:54.621 INFO: Running with entropic power schedule (0xFF, 100). 
00:25:54.621 INFO: Seed: 4055256621 00:25:54.621 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:54.621 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:54.621 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:25:54.621 INFO: A corpus is not provided, starting from an empty corpus 00:25:54.621 #2 INITED exec/s: 0 rss: 65Mb 00:25:54.621 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:54.621 This may also happen if the target rejected all inputs we tried so far 00:25:54.621 [2024-06-10 11:36:38.374998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:54.621 [2024-06-10 11:36:38.375033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:54.621 [2024-06-10 11:36:38.375088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:54.621 [2024-06-10 11:36:38.375105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:54.621 [2024-06-10 11:36:38.375165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:54.621 [2024-06-10 11:36:38.375182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:54.621 [2024-06-10 11:36:38.375235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:54.621 [2024-06-10 11:36:38.375257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:54.621 [2024-06-10 11:36:38.375315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:54.621 [2024-06-10 11:36:38.375332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:54.880 NEW_FUNC[1/688]: 0x4a8350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:25:54.880 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:54.880 #9 NEW cov: 11903 ft: 11902 corp: 2/51b lim: 50 exec/s: 0 rss: 72Mb L: 50/50 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:25:54.880 [2024-06-10 11:36:38.715879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:54.880 [2024-06-10 11:36:38.715921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:54.880 [2024-06-10 11:36:38.715982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:54.880 [2024-06-10 11:36:38.716000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:54.880 [2024-06-10 11:36:38.716057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:54.880 [2024-06-10 11:36:38.716071] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:54.880 [2024-06-10 11:36:38.716129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:54.880 [2024-06-10 11:36:38.716144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:54.880 [2024-06-10 11:36:38.716201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:54.880 [2024-06-10 11:36:38.716217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:54.880 #10 NEW cov: 12033 ft: 12452 corp: 3/101b lim: 50 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 ShuffleBytes- 00:25:54.880 [2024-06-10 11:36:38.765660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:54.880 [2024-06-10 11:36:38.765689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:54.880 [2024-06-10 11:36:38.765727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:54.880 [2024-06-10 11:36:38.765745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:54.880 [2024-06-10 11:36:38.765804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:54.880 [2024-06-10 11:36:38.765822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:54.880 #11 NEW cov: 12039 ft: 13077 corp: 4/131b lim: 50 exec/s: 0 rss: 72Mb L: 30/50 MS: 1 EraseBytes- 00:25:54.880 [2024-06-10 11:36:38.805770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:54.880 [2024-06-10 11:36:38.805797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:54.880 [2024-06-10 11:36:38.805842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:54.880 [2024-06-10 11:36:38.805858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:54.880 [2024-06-10 11:36:38.805915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:54.880 [2024-06-10 11:36:38.805932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.140 #13 NEW cov: 12124 ft: 13291 corp: 5/162b lim: 50 exec/s: 0 rss: 72Mb L: 31/50 MS: 2 ChangeBinInt-CrossOver- 00:25:55.140 [2024-06-10 11:36:38.845908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.140 [2024-06-10 11:36:38.845937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.140 [2024-06-10 11:36:38.845978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.140 [2024-06-10 11:36:38.845996] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.140 [2024-06-10 11:36:38.846055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.140 [2024-06-10 11:36:38.846072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.140 #14 NEW cov: 12124 ft: 13502 corp: 6/193b lim: 50 exec/s: 0 rss: 72Mb L: 31/50 MS: 1 ChangeBit- 00:25:55.140 [2024-06-10 11:36:38.896012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.140 [2024-06-10 11:36:38.896041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.140 [2024-06-10 11:36:38.896079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.140 [2024-06-10 11:36:38.896095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.140 [2024-06-10 11:36:38.896154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.140 [2024-06-10 11:36:38.896171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.140 #15 NEW cov: 12124 ft: 13687 corp: 7/231b lim: 50 exec/s: 0 rss: 72Mb L: 38/50 MS: 1 CopyPart- 00:25:55.140 [2024-06-10 11:36:38.936444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.140 [2024-06-10 11:36:38.936472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.140 [2024-06-10 11:36:38.936526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.140 [2024-06-10 11:36:38.936542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.140 [2024-06-10 11:36:38.936600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.140 [2024-06-10 11:36:38.936615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:38.936687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.141 [2024-06-10 11:36:38.936704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:38.936761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:55.141 [2024-06-10 11:36:38.936777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:55.141 #16 NEW cov: 12124 ft: 13764 corp: 8/281b lim: 50 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 ShuffleBytes- 00:25:55.141 [2024-06-10 11:36:38.976409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.141 [2024-06-10 11:36:38.976437] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:38.976488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.141 [2024-06-10 11:36:38.976504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:38.976577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.141 [2024-06-10 11:36:38.976594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:38.976654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.141 [2024-06-10 11:36:38.976670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.141 #17 NEW cov: 12124 ft: 13869 corp: 9/323b lim: 50 exec/s: 0 rss: 72Mb L: 42/50 MS: 1 CrossOver- 00:25:55.141 [2024-06-10 11:36:39.016350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.141 [2024-06-10 11:36:39.016379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:39.016420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.141 [2024-06-10 11:36:39.016435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:39.016492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.141 [2024-06-10 11:36:39.016509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.141 #18 NEW cov: 12124 ft: 13945 corp: 10/354b lim: 50 exec/s: 0 rss: 72Mb L: 31/50 MS: 1 ChangeBinInt- 00:25:55.141 [2024-06-10 11:36:39.066502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.141 [2024-06-10 11:36:39.066530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:39.066576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.141 [2024-06-10 11:36:39.066593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.141 [2024-06-10 11:36:39.066650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.141 [2024-06-10 11:36:39.066666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.141 #19 NEW cov: 12124 ft: 13987 corp: 11/385b lim: 50 exec/s: 0 rss: 72Mb L: 31/50 MS: 1 ChangeBit- 00:25:55.400 [2024-06-10 11:36:39.106805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.400 [2024-06-10 11:36:39.106833] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.106882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.400 [2024-06-10 11:36:39.106899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.106954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.400 [2024-06-10 11:36:39.106976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.107036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.400 [2024-06-10 11:36:39.107051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.400 #20 NEW cov: 12124 ft: 14027 corp: 12/427b lim: 50 exec/s: 0 rss: 72Mb L: 42/50 MS: 1 ChangeBinInt- 00:25:55.400 [2024-06-10 11:36:39.156786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.400 [2024-06-10 11:36:39.156814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.156881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.400 [2024-06-10 11:36:39.156899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.156958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.400 [2024-06-10 11:36:39.156975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.400 #21 NEW cov: 12124 ft: 14075 corp: 13/466b lim: 50 exec/s: 0 rss: 73Mb L: 39/50 MS: 1 InsertByte- 00:25:55.400 [2024-06-10 11:36:39.207256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.400 [2024-06-10 11:36:39.207286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.207336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.400 [2024-06-10 11:36:39.207353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.207410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.400 [2024-06-10 11:36:39.207427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.207481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.400 [2024-06-10 11:36:39.207498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:25:55.400 [2024-06-10 11:36:39.207557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:55.400 [2024-06-10 11:36:39.207574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:55.400 #22 NEW cov: 12124 ft: 14127 corp: 14/516b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ChangeBit- 00:25:55.400 [2024-06-10 11:36:39.247354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.400 [2024-06-10 11:36:39.247383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.247440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.400 [2024-06-10 11:36:39.247456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.247530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.400 [2024-06-10 11:36:39.247544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.247607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.400 [2024-06-10 11:36:39.247624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.247684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:55.400 [2024-06-10 11:36:39.247699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:55.400 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:55.400 #23 NEW cov: 12147 ft: 14173 corp: 15/566b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 CrossOver- 00:25:55.400 [2024-06-10 11:36:39.297357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.400 [2024-06-10 11:36:39.297384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.297436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.400 [2024-06-10 11:36:39.297452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.297507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.400 [2024-06-10 11:36:39.297523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.297580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.400 [2024-06-10 11:36:39.297596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:25:55.400 #24 NEW cov: 12147 ft: 14186 corp: 16/615b lim: 50 exec/s: 0 rss: 73Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:25:55.400 [2024-06-10 11:36:39.347474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.400 [2024-06-10 11:36:39.347504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.347555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.400 [2024-06-10 11:36:39.347573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.347631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.400 [2024-06-10 11:36:39.347648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.400 [2024-06-10 11:36:39.347706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.400 [2024-06-10 11:36:39.347723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.658 #25 NEW cov: 12147 ft: 14191 corp: 17/662b lim: 50 exec/s: 25 rss: 73Mb L: 47/50 MS: 1 CopyPart- 00:25:55.658 [2024-06-10 11:36:39.397634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.658 [2024-06-10 11:36:39.397663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.658 [2024-06-10 11:36:39.397706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.658 [2024-06-10 11:36:39.397721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.658 [2024-06-10 11:36:39.397779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.658 [2024-06-10 11:36:39.397799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.658 [2024-06-10 11:36:39.397857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.658 [2024-06-10 11:36:39.397872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.658 #26 NEW cov: 12147 ft: 14226 corp: 18/711b lim: 50 exec/s: 26 rss: 73Mb L: 49/50 MS: 1 EraseBytes- 00:25:55.658 [2024-06-10 11:36:39.447949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.658 [2024-06-10 11:36:39.447978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.658 [2024-06-10 11:36:39.448035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.658 [2024-06-10 11:36:39.448053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:25:55.658 [2024-06-10 11:36:39.448111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.658 [2024-06-10 11:36:39.448126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.448186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.659 [2024-06-10 11:36:39.448204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.448263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:55.659 [2024-06-10 11:36:39.448279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:55.659 #27 NEW cov: 12147 ft: 14272 corp: 19/761b lim: 50 exec/s: 27 rss: 73Mb L: 50/50 MS: 1 CopyPart- 00:25:55.659 [2024-06-10 11:36:39.497918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.659 [2024-06-10 11:36:39.497947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.497989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.659 [2024-06-10 11:36:39.498007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.498080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.659 [2024-06-10 11:36:39.498097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.498157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.659 [2024-06-10 11:36:39.498172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.659 #28 NEW cov: 12147 ft: 14314 corp: 20/808b lim: 50 exec/s: 28 rss: 73Mb L: 47/50 MS: 1 ShuffleBytes- 00:25:55.659 [2024-06-10 11:36:39.548195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.659 [2024-06-10 11:36:39.548223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.548285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.659 [2024-06-10 11:36:39.548302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.548359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.659 [2024-06-10 11:36:39.548379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.548437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.659 [2024-06-10 11:36:39.548453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.548510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:55.659 [2024-06-10 11:36:39.548528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:55.659 #29 NEW cov: 12147 ft: 14320 corp: 21/858b lim: 50 exec/s: 29 rss: 73Mb L: 50/50 MS: 1 CopyPart- 00:25:55.659 [2024-06-10 11:36:39.588143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.659 [2024-06-10 11:36:39.588171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.588221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.659 [2024-06-10 11:36:39.588238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.588300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.659 [2024-06-10 11:36:39.588316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.659 [2024-06-10 11:36:39.588372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.659 [2024-06-10 11:36:39.588387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.917 #30 NEW cov: 12147 ft: 14330 corp: 22/907b lim: 50 exec/s: 30 rss: 73Mb L: 49/50 MS: 1 CopyPart- 00:25:55.917 [2024-06-10 11:36:39.637852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.917 [2024-06-10 11:36:39.637879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.917 #31 NEW cov: 12147 ft: 15200 corp: 23/926b lim: 50 exec/s: 31 rss: 73Mb L: 19/50 MS: 1 EraseBytes- 00:25:55.917 [2024-06-10 11:36:39.678589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.917 [2024-06-10 11:36:39.678619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.678682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.917 [2024-06-10 11:36:39.678699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.678758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.917 [2024-06-10 11:36:39.678774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.678834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:3 nsid:0 00:25:55.917 [2024-06-10 11:36:39.678851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.678910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:55.917 [2024-06-10 11:36:39.678926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:55.917 #32 NEW cov: 12147 ft: 15239 corp: 24/976b lim: 50 exec/s: 32 rss: 73Mb L: 50/50 MS: 1 CopyPart- 00:25:55.917 [2024-06-10 11:36:39.718365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.917 [2024-06-10 11:36:39.718393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.718450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.917 [2024-06-10 11:36:39.718468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.718529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.917 [2024-06-10 11:36:39.718545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.917 #36 NEW cov: 12147 ft: 15245 corp: 25/1012b lim: 50 exec/s: 36 rss: 73Mb L: 36/50 MS: 4 ShuffleBytes-ChangeBit-CopyPart-InsertRepeatedBytes- 00:25:55.917 [2024-06-10 11:36:39.758488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.917 [2024-06-10 11:36:39.758518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.758559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.917 [2024-06-10 11:36:39.758576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.758637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.917 [2024-06-10 11:36:39.758654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.917 #37 NEW cov: 12147 ft: 15268 corp: 26/1043b lim: 50 exec/s: 37 rss: 73Mb L: 31/50 MS: 1 CMP- DE: "\015\000\000\000"- 00:25:55.917 [2024-06-10 11:36:39.798923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.917 [2024-06-10 11:36:39.798950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.799020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.917 [2024-06-10 11:36:39.799037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.799096] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.917 [2024-06-10 11:36:39.799112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.799168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.917 [2024-06-10 11:36:39.799183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.799239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:55.917 [2024-06-10 11:36:39.799263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:55.917 #38 NEW cov: 12147 ft: 15299 corp: 27/1093b lim: 50 exec/s: 38 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:25:55.917 [2024-06-10 11:36:39.838933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:55.917 [2024-06-10 11:36:39.838961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.839020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:55.917 [2024-06-10 11:36:39.839040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.839098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:55.917 [2024-06-10 11:36:39.839112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:55.917 [2024-06-10 11:36:39.839171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:55.917 [2024-06-10 11:36:39.839188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:56.176 #39 NEW cov: 12147 ft: 15304 corp: 28/1142b lim: 50 exec/s: 39 rss: 73Mb L: 49/50 MS: 1 ChangeBinInt- 00:25:56.176 [2024-06-10 11:36:39.889162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.176 [2024-06-10 11:36:39.889190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.889266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:56.176 [2024-06-10 11:36:39.889283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.889343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.176 [2024-06-10 11:36:39.889360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.889415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:56.176 [2024-06-10 
11:36:39.889432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.889493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:56.176 [2024-06-10 11:36:39.889510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:56.176 #40 NEW cov: 12147 ft: 15317 corp: 29/1192b lim: 50 exec/s: 40 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:25:56.176 [2024-06-10 11:36:39.929282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.176 [2024-06-10 11:36:39.929310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.929375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:56.176 [2024-06-10 11:36:39.929393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.929452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.176 [2024-06-10 11:36:39.929467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.929524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:56.176 [2024-06-10 11:36:39.929540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.929600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:56.176 [2024-06-10 11:36:39.929617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:56.176 #41 NEW cov: 12147 ft: 15322 corp: 30/1242b lim: 50 exec/s: 41 rss: 73Mb L: 50/50 MS: 1 ChangeByte- 00:25:56.176 [2024-06-10 11:36:39.979129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.176 [2024-06-10 11:36:39.979161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.979202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:56.176 [2024-06-10 11:36:39.979219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:39.979282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.176 [2024-06-10 11:36:39.979298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.176 #42 NEW cov: 12147 ft: 15326 corp: 31/1273b lim: 50 exec/s: 42 rss: 74Mb L: 31/50 MS: 1 CopyPart- 00:25:56.176 [2024-06-10 11:36:40.028977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.176 [2024-06-10 
11:36:40.029007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.176 #43 NEW cov: 12147 ft: 15395 corp: 32/1289b lim: 50 exec/s: 43 rss: 74Mb L: 16/50 MS: 1 EraseBytes- 00:25:56.176 [2024-06-10 11:36:40.089435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.176 [2024-06-10 11:36:40.089468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:40.089530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:56.176 [2024-06-10 11:36:40.089547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.176 [2024-06-10 11:36:40.089608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.176 [2024-06-10 11:36:40.089625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.176 #44 NEW cov: 12147 ft: 15432 corp: 33/1323b lim: 50 exec/s: 44 rss: 74Mb L: 34/50 MS: 1 PersAutoDict- DE: "\015\000\000\000"- 00:25:56.435 [2024-06-10 11:36:40.139904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.435 [2024-06-10 11:36:40.139934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.435 [2024-06-10 11:36:40.139983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:56.435 [2024-06-10 11:36:40.140000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.435 [2024-06-10 11:36:40.140060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.435 [2024-06-10 11:36:40.140077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.435 [2024-06-10 11:36:40.140136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:56.435 [2024-06-10 11:36:40.140153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:56.435 [2024-06-10 11:36:40.140210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:56.435 [2024-06-10 11:36:40.140226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:56.435 #45 NEW cov: 12147 ft: 15469 corp: 34/1373b lim: 50 exec/s: 45 rss: 74Mb L: 50/50 MS: 1 ChangeBinInt- 00:25:56.435 [2024-06-10 11:36:40.179993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.435 [2024-06-10 11:36:40.180022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.435 [2024-06-10 11:36:40.180066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 
00:25:56.435 [2024-06-10 11:36:40.180083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.435 [2024-06-10 11:36:40.180142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.435 [2024-06-10 11:36:40.180156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.435 [2024-06-10 11:36:40.180216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:56.435 [2024-06-10 11:36:40.180233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:56.435 [2024-06-10 11:36:40.180307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:56.435 [2024-06-10 11:36:40.180330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:56.435 #46 NEW cov: 12147 ft: 15478 corp: 35/1423b lim: 50 exec/s: 46 rss: 74Mb L: 50/50 MS: 1 ChangeBinInt- 00:25:56.435 [2024-06-10 11:36:40.229472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.435 [2024-06-10 11:36:40.229501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.435 #47 NEW cov: 12147 ft: 15508 corp: 36/1438b lim: 50 exec/s: 47 rss: 74Mb L: 15/50 MS: 1 EraseBytes- 00:25:56.435 [2024-06-10 11:36:40.280226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.435 [2024-06-10 11:36:40.280257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.280326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:56.436 [2024-06-10 11:36:40.280342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.280400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.436 [2024-06-10 11:36:40.280414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.280475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:56.436 [2024-06-10 11:36:40.280506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.280566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:25:56.436 [2024-06-10 11:36:40.280582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:25:56.436 #48 NEW cov: 12147 ft: 15513 corp: 37/1488b lim: 50 exec/s: 48 rss: 74Mb L: 50/50 MS: 1 ChangeASCIIInt- 00:25:56.436 [2024-06-10 11:36:40.330239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
00:25:56.436 [2024-06-10 11:36:40.330274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.330323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:56.436 [2024-06-10 11:36:40.330340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.330399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.436 [2024-06-10 11:36:40.330418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.330477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:56.436 [2024-06-10 11:36:40.330491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:56.436 #49 NEW cov: 12147 ft: 15551 corp: 38/1535b lim: 50 exec/s: 49 rss: 74Mb L: 47/50 MS: 1 PersAutoDict- DE: "\015\000\000\000"- 00:25:56.436 [2024-06-10 11:36:40.370344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:25:56.436 [2024-06-10 11:36:40.370372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.370425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:25:56.436 [2024-06-10 11:36:40.370442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.370499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:25:56.436 [2024-06-10 11:36:40.370514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:56.436 [2024-06-10 11:36:40.370571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:25:56.436 [2024-06-10 11:36:40.370589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:56.695 #50 NEW cov: 12147 ft: 15561 corp: 39/1576b lim: 50 exec/s: 25 rss: 74Mb L: 41/50 MS: 1 CrossOver- 00:25:56.695 #50 DONE cov: 12147 ft: 15561 corp: 39/1576b lim: 50 exec/s: 25 rss: 74Mb 00:25:56.695 ###### Recommended dictionary. ###### 00:25:56.695 "\015\000\000\000" # Uses: 2 00:25:56.695 ###### End of recommended dictionary. 
###### 00:25:56.695 Done 50 runs in 2 second(s) 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:56.695 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:56.696 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:56.696 11:36:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:25:56.696 [2024-06-10 11:36:40.585746] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:56.696 [2024-06-10 11:36:40.585831] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336828 ] 00:25:56.696 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.263 [2024-06-10 11:36:40.918674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.263 [2024-06-10 11:36:41.016414] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.263 [2024-06-10 11:36:41.076738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.263 [2024-06-10 11:36:41.092976] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:25:57.263 INFO: Running with entropic power schedule (0xFF, 100). 
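The xtrace lines above (nvmf/run.sh@34 through @45) show the per-run setup for fuzzer type 22: the type is zero-padded and turned into a TCP listener port (4422), the JSON target config is rewritten to listen on that port, two known leaks are registered with LeakSanitizer, and llvm_nvme_fuzz is launched against the rewritten config and corpus directory. The following is a minimal stand-alone sketch of those steps, not the real run.sh: the $SPDK_DIR root, the "44" port prefix, and the redirections of the sed output and the leak patterns are assumptions inferred from the trace, which does not show them directly.

#!/usr/bin/env bash
# Hedged sketch of the traced per-run setup; paths and redirections are assumptions.
fuzzer_type=22
timen=1          # -t: run time in seconds, as passed in the trace
core=0x1         # -m: SPDK core mask, as passed in the trace
port="44$(printf '%02d' "$fuzzer_type")"   # printf %02d 22 -> "22", giving port 4422 (prefix inferred)
corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_${fuzzer_type}"
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
suppress_file="/var/tmp/suppress_nvmf_fuzz"

mkdir -p "$corpus_dir"

# Point the NVMe-oF listener in the template config at the per-run port (template uses 4420).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Suppress the two intentional leaks the harness is known to report.
printf 'leak:%s\n' spdk_nvmf_qpair_disconnect nvmf_ctrlr_create > "$suppress_file"
export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"

trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m "$core" -s 512 -P "$SPDK_DIR/../output/llvm/" \
    -F "$trid" -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"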
00:25:57.263 INFO: Seed: 2534275317 00:25:57.263 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:25:57.263 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:25:57.263 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:25:57.263 INFO: A corpus is not provided, starting from an empty corpus 00:25:57.263 #2 INITED exec/s: 0 rss: 64Mb 00:25:57.263 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:25:57.263 This may also happen if the target rejected all inputs we tried so far 00:25:57.263 [2024-06-10 11:36:41.159767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:57.263 [2024-06-10 11:36:41.159811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:57.263 [2024-06-10 11:36:41.159935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:57.263 [2024-06-10 11:36:41.159958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:57.263 [2024-06-10 11:36:41.160073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:57.263 [2024-06-10 11:36:41.160103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:57.831 NEW_FUNC[1/688]: 0x4aa610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:25:57.831 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:25:57.831 #21 NEW cov: 11929 ft: 11921 corp: 2/60b lim: 85 exec/s: 0 rss: 72Mb L: 59/59 MS: 4 ChangeByte-ShuffleBytes-CMP-InsertRepeatedBytes- DE: "G\000\000\000\000\000\000\000"- 00:25:57.831 [2024-06-10 11:36:41.511397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:57.831 [2024-06-10 11:36:41.511463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.511574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:57.831 [2024-06-10 11:36:41.511605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.511717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:57.831 [2024-06-10 11:36:41.511749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:57.831 #22 NEW cov: 12059 ft: 12473 corp: 3/119b lim: 85 exec/s: 0 rss: 72Mb L: 59/59 MS: 1 PersAutoDict- DE: "G\000\000\000\000\000\000\000"- 00:25:57.831 [2024-06-10 11:36:41.581481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:57.831 [2024-06-10 11:36:41.581514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:25:57.831 [2024-06-10 11:36:41.581598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:57.831 [2024-06-10 11:36:41.581615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.581705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:57.831 [2024-06-10 11:36:41.581722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:57.831 #23 NEW cov: 12065 ft: 12745 corp: 4/178b lim: 85 exec/s: 0 rss: 72Mb L: 59/59 MS: 1 ChangeByte- 00:25:57.831 [2024-06-10 11:36:41.642126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:57.831 [2024-06-10 11:36:41.642154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.642242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:57.831 [2024-06-10 11:36:41.642265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.642364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:57.831 [2024-06-10 11:36:41.642379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.642468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:25:57.831 [2024-06-10 11:36:41.642484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:57.831 #24 NEW cov: 12150 ft: 13336 corp: 5/250b lim: 85 exec/s: 0 rss: 72Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:25:57.831 [2024-06-10 11:36:41.691950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:57.831 [2024-06-10 11:36:41.691977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.692048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:57.831 [2024-06-10 11:36:41.692066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.692149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:57.831 [2024-06-10 11:36:41.692172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:57.831 #25 NEW cov: 12150 ft: 13410 corp: 6/309b lim: 85 exec/s: 0 rss: 72Mb L: 59/72 MS: 1 CMP- DE: "\001\002"- 00:25:57.831 [2024-06-10 11:36:41.752293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:57.831 [2024-06-10 11:36:41.752319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:25:57.831 [2024-06-10 11:36:41.752385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:57.831 [2024-06-10 11:36:41.752403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:57.832 [2024-06-10 11:36:41.752482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:57.832 [2024-06-10 11:36:41.752500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:57.832 #28 NEW cov: 12150 ft: 13479 corp: 7/363b lim: 85 exec/s: 0 rss: 72Mb L: 54/72 MS: 3 CopyPart-ChangeByte-CrossOver- 00:25:58.090 [2024-06-10 11:36:41.802525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.090 [2024-06-10 11:36:41.802554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.090 [2024-06-10 11:36:41.802642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.090 [2024-06-10 11:36:41.802663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.090 [2024-06-10 11:36:41.802763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.090 [2024-06-10 11:36:41.802782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.090 #29 NEW cov: 12150 ft: 13600 corp: 8/423b lim: 85 exec/s: 0 rss: 72Mb L: 60/72 MS: 1 InsertByte- 00:25:58.090 [2024-06-10 11:36:41.852567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.090 [2024-06-10 11:36:41.852594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.090 [2024-06-10 11:36:41.852665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.090 [2024-06-10 11:36:41.852681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.090 [2024-06-10 11:36:41.852762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.090 [2024-06-10 11:36:41.852783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.090 #30 NEW cov: 12150 ft: 13708 corp: 9/482b lim: 85 exec/s: 0 rss: 72Mb L: 59/72 MS: 1 ShuffleBytes- 00:25:58.090 [2024-06-10 11:36:41.902844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.090 [2024-06-10 11:36:41.902874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.090 [2024-06-10 11:36:41.902953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.090 [2024-06-10 11:36:41.902974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.090 [2024-06-10 11:36:41.903048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.090 [2024-06-10 11:36:41.903068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.090 #31 NEW cov: 12150 ft: 13767 corp: 10/541b lim: 85 exec/s: 0 rss: 72Mb L: 59/72 MS: 1 PersAutoDict- DE: "\001\002"- 00:25:58.091 [2024-06-10 11:36:41.953073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.091 [2024-06-10 11:36:41.953099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.091 [2024-06-10 11:36:41.953186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.091 [2024-06-10 11:36:41.953207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.091 [2024-06-10 11:36:41.953283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.091 [2024-06-10 11:36:41.953300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.091 #32 NEW cov: 12150 ft: 13818 corp: 11/600b lim: 85 exec/s: 0 rss: 72Mb L: 59/72 MS: 1 ChangeBit- 00:25:58.091 [2024-06-10 11:36:42.003358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.091 [2024-06-10 11:36:42.003384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.091 [2024-06-10 11:36:42.003467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.091 [2024-06-10 11:36:42.003484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.091 [2024-06-10 11:36:42.003569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.091 [2024-06-10 11:36:42.003587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.091 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:25:58.091 #33 NEW cov: 12173 ft: 13858 corp: 12/659b lim: 85 exec/s: 0 rss: 73Mb L: 59/72 MS: 1 ShuffleBytes- 00:25:58.348 [2024-06-10 11:36:42.053587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.348 [2024-06-10 11:36:42.053615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.053689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.348 [2024-06-10 11:36:42.053705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.053807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:2 nsid:0 00:25:58.348 [2024-06-10 11:36:42.053826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.348 #34 NEW cov: 12173 ft: 13886 corp: 13/718b lim: 85 exec/s: 0 rss: 73Mb L: 59/72 MS: 1 ChangeBinInt- 00:25:58.348 [2024-06-10 11:36:42.114196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.348 [2024-06-10 11:36:42.114223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.114317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.348 [2024-06-10 11:36:42.114338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.114429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.348 [2024-06-10 11:36:42.114448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.114537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:25:58.348 [2024-06-10 11:36:42.114555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:58.348 #35 NEW cov: 12173 ft: 13909 corp: 14/790b lim: 85 exec/s: 35 rss: 73Mb L: 72/72 MS: 1 CopyPart- 00:25:58.348 [2024-06-10 11:36:42.174068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.348 [2024-06-10 11:36:42.174094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.174178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.348 [2024-06-10 11:36:42.174196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.174288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.348 [2024-06-10 11:36:42.174309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.348 #36 NEW cov: 12173 ft: 13950 corp: 15/849b lim: 85 exec/s: 36 rss: 73Mb L: 59/72 MS: 1 CrossOver- 00:25:58.348 [2024-06-10 11:36:42.234414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.348 [2024-06-10 11:36:42.234442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.234522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.348 [2024-06-10 11:36:42.234541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.234631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:2 nsid:0 00:25:58.348 [2024-06-10 11:36:42.234648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.348 #37 NEW cov: 12173 ft: 13968 corp: 16/908b lim: 85 exec/s: 37 rss: 73Mb L: 59/72 MS: 1 ChangeByte- 00:25:58.348 [2024-06-10 11:36:42.284575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.348 [2024-06-10 11:36:42.284602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.284692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.348 [2024-06-10 11:36:42.284710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.348 [2024-06-10 11:36:42.284795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.348 [2024-06-10 11:36:42.284813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.608 #38 NEW cov: 12173 ft: 13989 corp: 17/975b lim: 85 exec/s: 38 rss: 73Mb L: 67/72 MS: 1 CMP- DE: "I\000\000\000\000\000\000\000"- 00:25:58.608 [2024-06-10 11:36:42.344976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.608 [2024-06-10 11:36:42.345003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.345096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.608 [2024-06-10 11:36:42.345114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.345198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.608 [2024-06-10 11:36:42.345215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.608 #39 NEW cov: 12173 ft: 13996 corp: 18/1034b lim: 85 exec/s: 39 rss: 73Mb L: 59/72 MS: 1 ChangeByte- 00:25:58.608 [2024-06-10 11:36:42.395465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.608 [2024-06-10 11:36:42.395496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.395557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.608 [2024-06-10 11:36:42.395575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.395641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.608 [2024-06-10 11:36:42.395660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.608 #40 NEW cov: 12173 ft: 14000 corp: 19/1089b lim: 85 exec/s: 40 rss: 73Mb L: 55/72 MS: 
1 InsertByte- 00:25:58.608 [2024-06-10 11:36:42.465993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.608 [2024-06-10 11:36:42.466022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.466096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.608 [2024-06-10 11:36:42.466117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.466206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.608 [2024-06-10 11:36:42.466224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.608 #41 NEW cov: 12173 ft: 14026 corp: 20/1148b lim: 85 exec/s: 41 rss: 73Mb L: 59/72 MS: 1 ChangeBit- 00:25:58.608 [2024-06-10 11:36:42.536783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.608 [2024-06-10 11:36:42.536812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.536882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.608 [2024-06-10 11:36:42.536903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.536990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.608 [2024-06-10 11:36:42.537014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.608 [2024-06-10 11:36:42.537106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:25:58.608 [2024-06-10 11:36:42.537124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:58.866 #42 NEW cov: 12173 ft: 14114 corp: 21/1220b lim: 85 exec/s: 42 rss: 73Mb L: 72/72 MS: 1 ChangeByte- 00:25:58.866 [2024-06-10 11:36:42.606408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.866 [2024-06-10 11:36:42.606436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.866 [2024-06-10 11:36:42.606492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.867 [2024-06-10 11:36:42.606511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.867 [2024-06-10 11:36:42.606613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.867 [2024-06-10 11:36:42.606637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.867 #43 NEW cov: 12173 ft: 14133 corp: 22/1275b lim: 85 exec/s: 43 rss: 73Mb L: 55/72 
MS: 1 ChangeBit- 00:25:58.867 [2024-06-10 11:36:42.676310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.867 [2024-06-10 11:36:42.676339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.867 [2024-06-10 11:36:42.676403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.867 [2024-06-10 11:36:42.676421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.867 #44 NEW cov: 12173 ft: 14469 corp: 23/1310b lim: 85 exec/s: 44 rss: 74Mb L: 35/72 MS: 1 EraseBytes- 00:25:58.867 [2024-06-10 11:36:42.746877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.867 [2024-06-10 11:36:42.746907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.867 [2024-06-10 11:36:42.746982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.867 [2024-06-10 11:36:42.746999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.867 [2024-06-10 11:36:42.747086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.867 [2024-06-10 11:36:42.747105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.867 #45 NEW cov: 12173 ft: 14489 corp: 24/1369b lim: 85 exec/s: 45 rss: 74Mb L: 59/72 MS: 1 CMP- DE: "\232dT\022m\335\016\000"- 00:25:58.867 [2024-06-10 11:36:42.797506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:58.867 [2024-06-10 11:36:42.797535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:58.867 [2024-06-10 11:36:42.797621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:58.867 [2024-06-10 11:36:42.797638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:58.867 [2024-06-10 11:36:42.797728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:58.867 [2024-06-10 11:36:42.797748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:58.867 [2024-06-10 11:36:42.797844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:25:58.867 [2024-06-10 11:36:42.797862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:59.125 #46 NEW cov: 12173 ft: 14499 corp: 25/1441b lim: 85 exec/s: 46 rss: 74Mb L: 72/72 MS: 1 ChangeBinInt- 00:25:59.125 [2024-06-10 11:36:42.857338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:59.125 [2024-06-10 11:36:42.857365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:42.857432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:59.125 [2024-06-10 11:36:42.857449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:42.857523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:59.125 [2024-06-10 11:36:42.857545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:59.125 #47 NEW cov: 12173 ft: 14515 corp: 26/1500b lim: 85 exec/s: 47 rss: 74Mb L: 59/72 MS: 1 ChangeBinInt- 00:25:59.125 [2024-06-10 11:36:42.907454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:59.125 [2024-06-10 11:36:42.907479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:42.907565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:59.125 [2024-06-10 11:36:42.907583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:42.907663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:59.125 [2024-06-10 11:36:42.907683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:59.125 #48 NEW cov: 12173 ft: 14543 corp: 27/1559b lim: 85 exec/s: 48 rss: 74Mb L: 59/72 MS: 1 CopyPart- 00:25:59.125 [2024-06-10 11:36:42.967868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:59.125 [2024-06-10 11:36:42.967895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:42.967974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:59.125 [2024-06-10 11:36:42.967995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:42.968075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:59.125 [2024-06-10 11:36:42.968096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:59.125 #49 NEW cov: 12173 ft: 14557 corp: 28/1618b lim: 85 exec/s: 49 rss: 74Mb L: 59/72 MS: 1 ChangeByte- 00:25:59.125 [2024-06-10 11:36:43.018049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:59.125 [2024-06-10 11:36:43.018076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:43.018156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:59.125 [2024-06-10 11:36:43.018174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:43.018262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:59.125 [2024-06-10 11:36:43.018281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:59.125 #50 NEW cov: 12173 ft: 14586 corp: 29/1673b lim: 85 exec/s: 50 rss: 74Mb L: 55/72 MS: 1 CMP- DE: "\001\016\335h\264})r"- 00:25:59.125 [2024-06-10 11:36:43.068306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:59.125 [2024-06-10 11:36:43.068333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:43.068413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:59.125 [2024-06-10 11:36:43.068433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:59.125 [2024-06-10 11:36:43.068515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:59.125 [2024-06-10 11:36:43.068536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:59.385 #51 NEW cov: 12173 ft: 14594 corp: 30/1732b lim: 85 exec/s: 51 rss: 74Mb L: 59/72 MS: 1 ChangeBit- 00:25:59.385 [2024-06-10 11:36:43.118854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:25:59.385 [2024-06-10 11:36:43.118880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:25:59.385 [2024-06-10 11:36:43.118971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:25:59.385 [2024-06-10 11:36:43.118989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:25:59.385 [2024-06-10 11:36:43.119076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:25:59.385 [2024-06-10 11:36:43.119096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:25:59.385 [2024-06-10 11:36:43.119204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:25:59.385 [2024-06-10 11:36:43.119219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:25:59.385 #52 NEW cov: 12173 ft: 14609 corp: 31/1812b lim: 85 exec/s: 26 rss: 74Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:25:59.385 #52 DONE cov: 12173 ft: 14609 corp: 31/1812b lim: 85 exec/s: 26 rss: 74Mb 00:25:59.385 ###### Recommended dictionary. ###### 00:25:59.385 "G\000\000\000\000\000\000\000" # Uses: 1 00:25:59.385 "\001\002" # Uses: 1 00:25:59.385 "I\000\000\000\000\000\000\000" # Uses: 0 00:25:59.385 "\232dT\022m\335\016\000" # Uses: 0 00:25:59.385 "\001\016\335h\264})r" # Uses: 0 00:25:59.385 ###### End of recommended dictionary. 
###### 00:25:59.385 Done 52 runs in 2 second(s) 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:25:59.385 11:36:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:25:59.644 [2024-06-10 11:36:43.336470] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:25:59.644 [2024-06-10 11:36:43.336554] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337188 ] 00:25:59.644 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.903 [2024-06-10 11:36:43.668152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.903 [2024-06-10 11:36:43.761181] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.903 [2024-06-10 11:36:43.822008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.903 [2024-06-10 11:36:43.838280] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:25:59.903 INFO: Running with entropic power schedule (0xFF, 100). 
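Between runs, the lines tagged ../common.sh@72 and @73 trace a driver loop: the counter is incremented, checked against fuzz_num, and start_llvm_fuzz is called with the next fuzzer type, run time, and core mask (here 23 1 0x1). A hedged sketch of that loop is below; the start index and fuzz_num bound are placeholders rather than values visible in the log, and start_llvm_fuzz stands for the function nvmf/run.sh provides to do the per-port setup shown earlier.

#!/usr/bin/env bash
# Sketch of the common.sh driver loop traced as "(( i < fuzz_num ))" / "(( i++ ))"; bounds are assumed.
fuzz_start=0      # first fuzzer type to run (assumption)
fuzz_num=24       # one past the last fuzzer type (assumption)
timen=1           # seconds per run, as passed in the trace
core=0x1          # core mask, as passed in the trace

for (( i = fuzz_start; i < fuzz_num; i++ )); do
    # Provided by nvmf/run.sh: builds the per-run config, corpus dir, and port, then launches the fuzzer.
    start_llvm_fuzz "$i" "$timen" "$core"
done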
00:25:59.903 INFO: Seed: 983321572 00:26:00.161 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:26:00.161 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:26:00.161 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:26:00.161 INFO: A corpus is not provided, starting from an empty corpus 00:26:00.161 #2 INITED exec/s: 0 rss: 65Mb 00:26:00.161 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:26:00.161 This may also happen if the target rejected all inputs we tried so far 00:26:00.161 [2024-06-10 11:36:43.893720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.161 [2024-06-10 11:36:43.893758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.161 [2024-06-10 11:36:43.893812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.161 [2024-06-10 11:36:43.893831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.161 [2024-06-10 11:36:43.893890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:00.161 [2024-06-10 11:36:43.893906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:00.461 NEW_FUNC[1/687]: 0x4ad840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:26:00.461 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:26:00.461 #3 NEW cov: 11862 ft: 11860 corp: 2/19b lim: 25 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:26:00.461 [2024-06-10 11:36:44.224409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.461 [2024-06-10 11:36:44.224454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.461 [2024-06-10 11:36:44.224516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.461 [2024-06-10 11:36:44.224533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.461 #4 NEW cov: 11992 ft: 12812 corp: 3/31b lim: 25 exec/s: 0 rss: 72Mb L: 12/18 MS: 1 InsertRepeatedBytes- 00:26:00.461 [2024-06-10 11:36:44.264681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.461 [2024-06-10 11:36:44.264713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.461 [2024-06-10 11:36:44.264753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.462 [2024-06-10 11:36:44.264771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.462 [2024-06-10 11:36:44.264826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:2 nsid:0 00:26:00.462 [2024-06-10 11:36:44.264844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:00.462 [2024-06-10 11:36:44.264903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:00.462 [2024-06-10 11:36:44.264918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:00.462 #5 NEW cov: 11998 ft: 13347 corp: 4/52b lim: 25 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:26:00.462 [2024-06-10 11:36:44.314812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.462 [2024-06-10 11:36:44.314843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.462 [2024-06-10 11:36:44.314886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.462 [2024-06-10 11:36:44.314905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.462 [2024-06-10 11:36:44.314963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:00.462 [2024-06-10 11:36:44.314981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:00.462 [2024-06-10 11:36:44.315040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:00.462 [2024-06-10 11:36:44.315056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:00.462 #6 NEW cov: 12083 ft: 13653 corp: 5/74b lim: 25 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 InsertByte- 00:26:00.462 [2024-06-10 11:36:44.364742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.462 [2024-06-10 11:36:44.364771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.462 [2024-06-10 11:36:44.364813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.462 [2024-06-10 11:36:44.364830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.728 #7 NEW cov: 12083 ft: 13792 corp: 6/87b lim: 25 exec/s: 0 rss: 72Mb L: 13/22 MS: 1 CrossOver- 00:26:00.728 [2024-06-10 11:36:44.405085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.728 [2024-06-10 11:36:44.405115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.405165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.728 [2024-06-10 11:36:44.405181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.405239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:2 nsid:0 00:26:00.728 [2024-06-10 11:36:44.405261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.405320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:00.728 [2024-06-10 11:36:44.405337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:00.728 #8 NEW cov: 12083 ft: 13834 corp: 7/110b lim: 25 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:26:00.728 [2024-06-10 11:36:44.445355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.728 [2024-06-10 11:36:44.445384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.445433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.728 [2024-06-10 11:36:44.445450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.445509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:00.728 [2024-06-10 11:36:44.445527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.445582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:00.728 [2024-06-10 11:36:44.445599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.445656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:00.728 [2024-06-10 11:36:44.445676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:00.728 #9 NEW cov: 12083 ft: 13937 corp: 8/135b lim: 25 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 CrossOver- 00:26:00.728 [2024-06-10 11:36:44.495415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.728 [2024-06-10 11:36:44.495443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.495501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.728 [2024-06-10 11:36:44.495518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.495592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:00.728 [2024-06-10 11:36:44.495610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.495668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:00.728 [2024-06-10 11:36:44.495684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.495743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:00.728 [2024-06-10 11:36:44.495760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:00.728 #10 NEW cov: 12083 ft: 13990 corp: 9/160b lim: 25 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 ChangeBit- 00:26:00.728 [2024-06-10 11:36:44.545201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.728 [2024-06-10 11:36:44.545230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.545295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.728 [2024-06-10 11:36:44.545313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.728 #11 NEW cov: 12083 ft: 14031 corp: 10/174b lim: 25 exec/s: 0 rss: 72Mb L: 14/25 MS: 1 CrossOver- 00:26:00.728 [2024-06-10 11:36:44.585364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.728 [2024-06-10 11:36:44.585392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.585444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.728 [2024-06-10 11:36:44.585460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.728 #12 NEW cov: 12083 ft: 14137 corp: 11/187b lim: 25 exec/s: 0 rss: 73Mb L: 13/25 MS: 1 ChangeByte- 00:26:00.728 [2024-06-10 11:36:44.635506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:00.728 [2024-06-10 11:36:44.635535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:00.728 [2024-06-10 11:36:44.635576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:00.728 [2024-06-10 11:36:44.635593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:00.728 #13 NEW cov: 12083 ft: 14185 corp: 12/200b lim: 25 exec/s: 0 rss: 73Mb L: 13/25 MS: 1 EraseBytes- 00:26:01.002 [2024-06-10 11:36:44.685846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.002 [2024-06-10 11:36:44.685878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.685921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.002 [2024-06-10 11:36:44.685939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.685998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.002 [2024-06-10 
11:36:44.686015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.686074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.002 [2024-06-10 11:36:44.686089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.002 #14 NEW cov: 12083 ft: 14219 corp: 13/221b lim: 25 exec/s: 0 rss: 73Mb L: 21/25 MS: 1 CopyPart- 00:26:01.002 [2024-06-10 11:36:44.725961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.002 [2024-06-10 11:36:44.725993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.726032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.002 [2024-06-10 11:36:44.726051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.726111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.002 [2024-06-10 11:36:44.726130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.726190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.002 [2024-06-10 11:36:44.726208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.002 #15 NEW cov: 12083 ft: 14241 corp: 14/242b lim: 25 exec/s: 0 rss: 73Mb L: 21/25 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:26:01.002 [2024-06-10 11:36:44.766203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.002 [2024-06-10 11:36:44.766231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.766296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.002 [2024-06-10 11:36:44.766313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.766388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.002 [2024-06-10 11:36:44.766406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.766467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.002 [2024-06-10 11:36:44.766483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.002 [2024-06-10 11:36:44.766542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:01.002 [2024-06-10 11:36:44.766558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:01.002 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:01.003 #16 NEW cov: 12106 ft: 14296 corp: 15/267b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ShuffleBytes- 00:26:01.003 [2024-06-10 11:36:44.805822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.003 [2024-06-10 11:36:44.805851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.003 #17 NEW cov: 12106 ft: 14770 corp: 16/274b lim: 25 exec/s: 0 rss: 73Mb L: 7/25 MS: 1 EraseBytes- 00:26:01.003 [2024-06-10 11:36:44.846323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.003 [2024-06-10 11:36:44.846354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.003 [2024-06-10 11:36:44.846403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.003 [2024-06-10 11:36:44.846420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.003 [2024-06-10 11:36:44.846479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.003 [2024-06-10 11:36:44.846497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.003 [2024-06-10 11:36:44.846558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.003 [2024-06-10 11:36:44.846577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.003 #18 NEW cov: 12106 ft: 14784 corp: 17/295b lim: 25 exec/s: 0 rss: 73Mb L: 21/25 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:26:01.003 [2024-06-10 11:36:44.886108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.003 [2024-06-10 11:36:44.886138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.003 #19 NEW cov: 12106 ft: 14840 corp: 18/304b lim: 25 exec/s: 19 rss: 73Mb L: 9/25 MS: 1 EraseBytes- 00:26:01.003 [2024-06-10 11:36:44.926473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.003 [2024-06-10 11:36:44.926504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.003 [2024-06-10 11:36:44.926546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.003 [2024-06-10 11:36:44.926562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.003 [2024-06-10 11:36:44.926622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.003 [2024-06-10 11:36:44.926640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:26:01.262 #20 NEW cov: 12106 ft: 14859 corp: 19/320b lim: 25 exec/s: 20 rss: 73Mb L: 16/25 MS: 1 EraseBytes- 00:26:01.262 [2024-06-10 11:36:44.976658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.262 [2024-06-10 11:36:44.976686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.262 [2024-06-10 11:36:44.976737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.262 [2024-06-10 11:36:44.976754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.262 [2024-06-10 11:36:44.976815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.262 [2024-06-10 11:36:44.976833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.262 [2024-06-10 11:36:44.976892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.262 [2024-06-10 11:36:44.976913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.262 #21 NEW cov: 12106 ft: 14877 corp: 20/343b lim: 25 exec/s: 21 rss: 73Mb L: 23/25 MS: 1 ChangeBinInt- 00:26:01.262 [2024-06-10 11:36:45.026832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.262 [2024-06-10 11:36:45.026860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.262 [2024-06-10 11:36:45.026910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.262 [2024-06-10 11:36:45.026927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.262 [2024-06-10 11:36:45.026986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.262 [2024-06-10 11:36:45.027002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.262 [2024-06-10 11:36:45.027060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.262 [2024-06-10 11:36:45.027076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.262 #22 NEW cov: 12106 ft: 14914 corp: 21/366b lim: 25 exec/s: 22 rss: 73Mb L: 23/25 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:26:01.262 [2024-06-10 11:36:45.067110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.262 [2024-06-10 11:36:45.067139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.262 [2024-06-10 11:36:45.067196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.262 [2024-06-10 11:36:45.067213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.067274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.263 [2024-06-10 11:36:45.067291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.067362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.263 [2024-06-10 11:36:45.067378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.067438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:01.263 [2024-06-10 11:36:45.067454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:01.263 #23 NEW cov: 12106 ft: 14921 corp: 22/391b lim: 25 exec/s: 23 rss: 73Mb L: 25/25 MS: 1 CrossOver- 00:26:01.263 [2024-06-10 11:36:45.107171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.263 [2024-06-10 11:36:45.107202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.107258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.263 [2024-06-10 11:36:45.107275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.107335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.263 [2024-06-10 11:36:45.107351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.107408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.263 [2024-06-10 11:36:45.107428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.107487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:01.263 [2024-06-10 11:36:45.107504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:01.263 #24 NEW cov: 12106 ft: 14928 corp: 23/416b lim: 25 exec/s: 24 rss: 73Mb L: 25/25 MS: 1 CMP- DE: "\000\000\001\030"- 00:26:01.263 [2024-06-10 11:36:45.156929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.263 [2024-06-10 11:36:45.156959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.157014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.263 [2024-06-10 11:36:45.157033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.263 #25 NEW cov: 12106 ft: 14999 corp: 24/426b lim: 25 exec/s: 25 rss: 
73Mb L: 10/25 MS: 1 InsertByte- 00:26:01.263 [2024-06-10 11:36:45.207253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.263 [2024-06-10 11:36:45.207283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.207333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.263 [2024-06-10 11:36:45.207351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.263 [2024-06-10 11:36:45.207412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.263 [2024-06-10 11:36:45.207428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.521 #26 NEW cov: 12106 ft: 15007 corp: 25/443b lim: 25 exec/s: 26 rss: 73Mb L: 17/25 MS: 1 InsertRepeatedBytes- 00:26:01.521 [2024-06-10 11:36:45.247492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.521 [2024-06-10 11:36:45.247523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.247566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.521 [2024-06-10 11:36:45.247583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.247641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.521 [2024-06-10 11:36:45.247657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.247715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.521 [2024-06-10 11:36:45.247731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.521 #27 NEW cov: 12106 ft: 15023 corp: 26/464b lim: 25 exec/s: 27 rss: 73Mb L: 21/25 MS: 1 ShuffleBytes- 00:26:01.521 [2024-06-10 11:36:45.297735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.521 [2024-06-10 11:36:45.297765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.297816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.521 [2024-06-10 11:36:45.297831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.297893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.521 [2024-06-10 11:36:45.297912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.297970] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.521 [2024-06-10 11:36:45.297987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.298061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:01.521 [2024-06-10 11:36:45.298081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:01.521 #28 NEW cov: 12106 ft: 15048 corp: 27/489b lim: 25 exec/s: 28 rss: 73Mb L: 25/25 MS: 1 ChangeByte- 00:26:01.521 [2024-06-10 11:36:45.347883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.521 [2024-06-10 11:36:45.347913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.347962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.521 [2024-06-10 11:36:45.347980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.348037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.521 [2024-06-10 11:36:45.348052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.348110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.521 [2024-06-10 11:36:45.348127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.348185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:01.521 [2024-06-10 11:36:45.348203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:01.521 #29 NEW cov: 12106 ft: 15086 corp: 28/514b lim: 25 exec/s: 29 rss: 73Mb L: 25/25 MS: 1 ChangeByte- 00:26:01.521 [2024-06-10 11:36:45.387898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.521 [2024-06-10 11:36:45.387928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.387976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.521 [2024-06-10 11:36:45.387995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.388051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.521 [2024-06-10 11:36:45.388071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.388132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.521 [2024-06-10 11:36:45.388148] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.521 #30 NEW cov: 12106 ft: 15096 corp: 29/534b lim: 25 exec/s: 30 rss: 73Mb L: 20/25 MS: 1 InsertRepeatedBytes- 00:26:01.521 [2024-06-10 11:36:45.438019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.521 [2024-06-10 11:36:45.438053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.438094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.521 [2024-06-10 11:36:45.438110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.438166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.521 [2024-06-10 11:36:45.438184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.521 [2024-06-10 11:36:45.438250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.521 [2024-06-10 11:36:45.438267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.780 #31 NEW cov: 12106 ft: 15199 corp: 30/558b lim: 25 exec/s: 31 rss: 73Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:26:01.780 [2024-06-10 11:36:45.487941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.780 [2024-06-10 11:36:45.487970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.780 [2024-06-10 11:36:45.488022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.780 [2024-06-10 11:36:45.488040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.780 #32 NEW cov: 12106 ft: 15207 corp: 31/571b lim: 25 exec/s: 32 rss: 74Mb L: 13/25 MS: 1 ChangeByte- 00:26:01.780 [2024-06-10 11:36:45.538512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.780 [2024-06-10 11:36:45.538541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.780 [2024-06-10 11:36:45.538598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.780 [2024-06-10 11:36:45.538616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.780 [2024-06-10 11:36:45.538672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.780 [2024-06-10 11:36:45.538689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.780 [2024-06-10 11:36:45.538745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.780 [2024-06-10 11:36:45.538763] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.780 [2024-06-10 11:36:45.538819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:01.780 [2024-06-10 11:36:45.538837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:01.780 #33 NEW cov: 12106 ft: 15209 corp: 32/596b lim: 25 exec/s: 33 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:26:01.780 [2024-06-10 11:36:45.588238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.780 [2024-06-10 11:36:45.588287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.780 [2024-06-10 11:36:45.588328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.780 [2024-06-10 11:36:45.588347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.780 #34 NEW cov: 12106 ft: 15286 corp: 33/610b lim: 25 exec/s: 34 rss: 74Mb L: 14/25 MS: 1 ChangeByte- 00:26:01.780 [2024-06-10 11:36:45.638723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.780 [2024-06-10 11:36:45.638750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.780 [2024-06-10 11:36:45.638809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.780 [2024-06-10 11:36:45.638827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.780 [2024-06-10 11:36:45.638884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.780 [2024-06-10 11:36:45.638900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.781 [2024-06-10 11:36:45.638955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.781 [2024-06-10 11:36:45.638972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.781 [2024-06-10 11:36:45.639028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:01.781 [2024-06-10 11:36:45.639046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:01.781 #35 NEW cov: 12106 ft: 15295 corp: 34/635b lim: 25 exec/s: 35 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:26:01.781 [2024-06-10 11:36:45.678487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.781 [2024-06-10 11:36:45.678515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.781 [2024-06-10 11:36:45.678563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.781 [2024-06-10 11:36:45.678581] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.781 #36 NEW cov: 12106 ft: 15306 corp: 35/649b lim: 25 exec/s: 36 rss: 74Mb L: 14/25 MS: 1 InsertByte- 00:26:01.781 [2024-06-10 11:36:45.729010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:01.781 [2024-06-10 11:36:45.729040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:01.781 [2024-06-10 11:36:45.729097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:01.781 [2024-06-10 11:36:45.729115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:01.781 [2024-06-10 11:36:45.729170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:01.781 [2024-06-10 11:36:45.729187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:01.781 [2024-06-10 11:36:45.729251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:01.781 [2024-06-10 11:36:45.729268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:01.781 [2024-06-10 11:36:45.729333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:01.781 [2024-06-10 11:36:45.729350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:02.040 #37 NEW cov: 12106 ft: 15309 corp: 36/674b lim: 25 exec/s: 37 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:26:02.040 [2024-06-10 11:36:45.769002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:02.040 [2024-06-10 11:36:45.769032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:02.040 [2024-06-10 11:36:45.769076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:02.040 [2024-06-10 11:36:45.769094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:02.040 [2024-06-10 11:36:45.769155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:02.040 [2024-06-10 11:36:45.769174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:02.040 [2024-06-10 11:36:45.769233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:02.041 [2024-06-10 11:36:45.769253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:02.041 #38 NEW cov: 12106 ft: 15320 corp: 37/694b lim: 25 exec/s: 38 rss: 74Mb L: 20/25 MS: 1 CMP- DE: "\001\000\177\272,\016\375\264"- 00:26:02.041 [2024-06-10 11:36:45.809217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:02.041 [2024-06-10 11:36:45.809251] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:02.041 [2024-06-10 11:36:45.809302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:02.041 [2024-06-10 11:36:45.809319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:02.041 [2024-06-10 11:36:45.809377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:26:02.041 [2024-06-10 11:36:45.809394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:02.041 [2024-06-10 11:36:45.809451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:26:02.041 [2024-06-10 11:36:45.809468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:02.041 [2024-06-10 11:36:45.809525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:26:02.041 [2024-06-10 11:36:45.809542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:02.041 #39 NEW cov: 12106 ft: 15324 corp: 38/719b lim: 25 exec/s: 39 rss: 74Mb L: 25/25 MS: 1 ChangeBit- 00:26:02.041 [2024-06-10 11:36:45.848981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:02.041 [2024-06-10 11:36:45.849011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:02.041 [2024-06-10 11:36:45.849052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:26:02.041 [2024-06-10 11:36:45.849069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:02.041 #40 NEW cov: 12106 ft: 15427 corp: 39/729b lim: 25 exec/s: 40 rss: 74Mb L: 10/25 MS: 1 EraseBytes- 00:26:02.041 [2024-06-10 11:36:45.888907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:26:02.041 [2024-06-10 11:36:45.888936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:02.041 #41 NEW cov: 12106 ft: 15465 corp: 40/738b lim: 25 exec/s: 20 rss: 74Mb L: 9/25 MS: 1 ChangeBit- 00:26:02.041 #41 DONE cov: 12106 ft: 15465 corp: 40/738b lim: 25 exec/s: 20 rss: 74Mb 00:26:02.041 ###### Recommended dictionary. ###### 00:26:02.041 "\377\377\377\377\377\377\377\377" # Uses: 2 00:26:02.041 "\000\000\001\030" # Uses: 0 00:26:02.041 "\001\000\177\272,\016\375\264" # Uses: 0 00:26:02.041 ###### End of recommended dictionary. 
###### 00:26:02.041 Done 41 runs in 2 second(s) 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:26:02.301 11:36:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:26:02.301 [2024-06-10 11:36:46.102527] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:26:02.301 [2024-06-10 11:36:46.102601] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337555 ] 00:26:02.301 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.560 [2024-06-10 11:36:46.431801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.818 [2024-06-10 11:36:46.533006] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.818 [2024-06-10 11:36:46.593542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.818 [2024-06-10 11:36:46.609803] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:26:02.818 INFO: Running with entropic power schedule (0xFF, 100). 
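Between runs, the common.sh driver visible in the trace (the "(( i++ ))" and "(( i < fuzz_num ))" steps followed by "start_llvm_fuzz 24 1 0x1") simply advances an index and hands the next fuzzer type, the time budget, and the core mask to start_llvm_fuzz. A minimal bash sketch of that loop follows; the loop shape and the fuzz_num value are assumptions for illustration, not the real common.sh.

# Hypothetical stand-in for the common.sh loop seen in the traces above.
fuzz_num=25     # assumed: number of llvm_nvme_fuzz -Z types to cycle through
timen=1         # seconds per fuzzer, matches "-t 1" in the invocations
core=0x1        # core mask, matches "-m 0x1" in the invocations
i=0
while (( i < fuzz_num )); do
  start_llvm_fuzz "$i" "$timen" "$core"   # per-run setup and fuzzer invocation (nvmf/run.sh)
  (( i++ ))
done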
00:26:02.818 INFO: Seed: 3754333086 00:26:02.818 INFO: Loaded 1 modules (357716 inline 8-bit counters): 357716 [0x29a784c, 0x29feda0), 00:26:02.818 INFO: Loaded 1 PC tables (357716 PCs): 357716 [0x29feda0,0x2f742e0), 00:26:02.818 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:26:02.818 INFO: A corpus is not provided, starting from an empty corpus 00:26:02.818 #2 INITED exec/s: 0 rss: 65Mb 00:26:02.818 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:26:02.818 This may also happen if the target rejected all inputs we tried so far 00:26:02.818 [2024-06-10 11:36:46.681384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.818 [2024-06-10 11:36:46.681441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:02.818 [2024-06-10 11:36:46.681542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.819 [2024-06-10 11:36:46.681578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:02.819 [2024-06-10 11:36:46.681714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.819 [2024-06-10 11:36:46.681752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:02.819 [2024-06-10 11:36:46.681887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.819 [2024-06-10 11:36:46.681919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.077 NEW_FUNC[1/688]: 0x4ae920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:26:03.077 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:26:03.077 #3 NEW cov: 11930 ft: 11931 corp: 2/89b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:26:03.337 [2024-06-10 11:36:47.032256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.032300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.032376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.032396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.032497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 
[2024-06-10 11:36:47.032518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.032606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.032626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.337 #4 NEW cov: 12064 ft: 12583 corp: 3/188b lim: 100 exec/s: 0 rss: 72Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:26:03.337 [2024-06-10 11:36:47.101814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.101845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.101951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.101967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.337 #5 NEW cov: 12070 ft: 13308 corp: 4/233b lim: 100 exec/s: 0 rss: 72Mb L: 45/99 MS: 1 EraseBytes- 00:26:03.337 [2024-06-10 11:36:47.152935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551402 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.152963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.153057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.153077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.153139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.153161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.153251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.153271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.153373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.153395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:03.337 #6 NEW cov: 12155 ft: 13601 corp: 5/333b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 InsertByte- 00:26:03.337 [2024-06-10 11:36:47.212849] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.212877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.212951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.212968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.213054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.213072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.213163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.213179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.337 #7 NEW cov: 12155 ft: 13667 corp: 6/421b lim: 100 exec/s: 0 rss: 72Mb L: 88/100 MS: 1 ChangeByte- 00:26:03.337 [2024-06-10 11:36:47.263535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551402 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.263565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.263650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.263669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.263741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551599 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.263761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.263845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.263868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.337 [2024-06-10 11:36:47.263959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.337 [2024-06-10 11:36:47.263982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:03.596 #8 NEW cov: 12155 ft: 13760 corp: 7/521b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 
ChangeBit- 00:26:03.596 [2024-06-10 11:36:47.332768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.596 [2024-06-10 11:36:47.332797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.596 [2024-06-10 11:36:47.332885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.596 [2024-06-10 11:36:47.332901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.597 #14 NEW cov: 12155 ft: 13859 corp: 8/567b lim: 100 exec/s: 0 rss: 72Mb L: 46/100 MS: 1 InsertByte- 00:26:03.597 [2024-06-10 11:36:47.393687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.393718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.393800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.393818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.393875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.393891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.393988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.394006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.597 #15 NEW cov: 12155 ft: 13902 corp: 9/655b lim: 100 exec/s: 0 rss: 72Mb L: 88/100 MS: 1 ChangeByte- 00:26:03.597 [2024-06-10 11:36:47.453945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.453973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.454060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.454080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.454163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.454181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.454279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.454303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.597 #16 NEW cov: 12155 ft: 13928 corp: 10/743b lim: 100 exec/s: 0 rss: 72Mb L: 88/100 MS: 1 CopyPart- 00:26:03.597 [2024-06-10 11:36:47.504600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.504629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.504707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.504725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.504787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.504805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.504890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.504908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.597 [2024-06-10 11:36:47.505004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.597 [2024-06-10 11:36:47.505023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:03.597 #17 NEW cov: 12155 ft: 14028 corp: 11/843b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 CopyPart- 00:26:03.857 [2024-06-10 11:36:47.554747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.554775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.554866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.554884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.554972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.554993] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.555072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.555090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.555187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.555209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:03.857 NEW_FUNC[1/1]: 0x1a77150 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:03.857 #18 NEW cov: 12178 ft: 14099 corp: 12/943b lim: 100 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 CrossOver- 00:26:03.857 [2024-06-10 11:36:47.624936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551402 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.624963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.625060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.625079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.625162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551599 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.625180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.625276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.625307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.625410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.625426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:03.857 #19 NEW cov: 12178 ft: 14245 corp: 13/1043b lim: 100 exec/s: 19 rss: 73Mb L: 100/100 MS: 1 ChangeByte- 00:26:03.857 [2024-06-10 11:36:47.684085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.684111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.684186] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.684202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.857 #20 NEW cov: 12178 ft: 14254 corp: 14/1089b lim: 100 exec/s: 20 rss: 73Mb L: 46/100 MS: 1 ChangeByte- 00:26:03.857 [2024-06-10 11:36:47.744272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.744298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.744369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.744386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.857 #21 NEW cov: 12178 ft: 14283 corp: 15/1135b lim: 100 exec/s: 21 rss: 73Mb L: 46/100 MS: 1 ChangeByte- 00:26:03.857 [2024-06-10 11:36:47.805389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.805417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.805509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.805530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.805597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.857 [2024-06-10 11:36:47.805618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:03.857 [2024-06-10 11:36:47.805713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.858 [2024-06-10 11:36:47.805733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.117 #22 NEW cov: 12178 ft: 14300 corp: 16/1234b lim: 100 exec/s: 22 rss: 73Mb L: 99/100 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:26:04.117 [2024-06-10 11:36:47.855436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.855464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.855580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 
[2024-06-10 11:36:47.855600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.855680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.855694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.855784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.855804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.117 #23 NEW cov: 12178 ft: 14314 corp: 17/1322b lim: 100 exec/s: 23 rss: 73Mb L: 88/100 MS: 1 ChangeByte- 00:26:04.117 [2024-06-10 11:36:47.925790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.925819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.925901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.925919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.925985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.926005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.926103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.926122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.117 #24 NEW cov: 12178 ft: 14335 corp: 18/1411b lim: 100 exec/s: 24 rss: 73Mb L: 89/100 MS: 1 InsertByte- 00:26:04.117 [2024-06-10 11:36:47.976027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.976059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.976126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.976146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.976211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 
len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.976226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:47.976326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:47.976345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.117 #25 NEW cov: 12178 ft: 14396 corp: 19/1499b lim: 100 exec/s: 25 rss: 73Mb L: 88/100 MS: 1 ChangeBinInt- 00:26:04.117 [2024-06-10 11:36:48.026153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:48.026181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:48.026265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:48.026285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:48.026369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18428729675200069631 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:48.026387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.117 [2024-06-10 11:36:48.026483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.117 [2024-06-10 11:36:48.026503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.117 #26 NEW cov: 12178 ft: 14407 corp: 20/1588b lim: 100 exec/s: 26 rss: 73Mb L: 89/100 MS: 1 ChangeBit- 00:26:04.377 [2024-06-10 11:36:48.097006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551402 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.097037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.097130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.097150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.097235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.097261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.097349] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.097368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.097464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.097480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:04.377 #27 NEW cov: 12178 ft: 14419 corp: 21/1688b lim: 100 exec/s: 27 rss: 73Mb L: 100/100 MS: 1 ShuffleBytes- 00:26:04.377 [2024-06-10 11:36:48.146133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.146164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.146251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.146270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.377 #28 NEW cov: 12178 ft: 14448 corp: 22/1734b lim: 100 exec/s: 28 rss: 73Mb L: 46/100 MS: 1 CMP- DE: "\001\000\000\000"- 00:26:04.377 [2024-06-10 11:36:48.197187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.197214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.197309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.197328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.197408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.197427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.197525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.197542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.377 #29 NEW cov: 12178 ft: 14463 corp: 23/1822b lim: 100 exec/s: 29 rss: 73Mb L: 88/100 MS: 1 CopyPart- 00:26:04.377 [2024-06-10 11:36:48.247367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.247394] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.247488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.247506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.247589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.247608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.247698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.247719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.377 #30 NEW cov: 12178 ft: 14479 corp: 24/1911b lim: 100 exec/s: 30 rss: 73Mb L: 89/100 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:26:04.377 [2024-06-10 11:36:48.298026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.298052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.298189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.298208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.298306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.298321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.298409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.298427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.377 [2024-06-10 11:36:48.298518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.377 [2024-06-10 11:36:48.298535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:04.377 #31 NEW cov: 12178 ft: 14504 corp: 25/2011b lim: 100 exec/s: 31 rss: 73Mb L: 100/100 MS: 1 CopyPart- 00:26:04.637 [2024-06-10 11:36:48.347635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.347663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.347732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.347750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.347833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.347849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.637 #32 NEW cov: 12178 ft: 14780 corp: 26/2087b lim: 100 exec/s: 32 rss: 73Mb L: 76/100 MS: 1 EraseBytes- 00:26:04.637 [2024-06-10 11:36:48.408309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.408336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.408457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.408475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.408563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18428729675200069631 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.408581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.408670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.408687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.637 #33 NEW cov: 12178 ft: 14804 corp: 27/2176b lim: 100 exec/s: 33 rss: 73Mb L: 89/100 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:26:04.637 [2024-06-10 11:36:48.468837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551402 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.468863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.468972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.469001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.469089] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551599 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.469103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.469197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.469215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.469312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.469333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:26:04.637 #34 NEW cov: 12178 ft: 14809 corp: 28/2276b lim: 100 exec/s: 34 rss: 73Mb L: 100/100 MS: 1 CopyPart- 00:26:04.637 [2024-06-10 11:36:48.527875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.527904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.637 #35 NEW cov: 12178 ft: 15590 corp: 29/2306b lim: 100 exec/s: 35 rss: 73Mb L: 30/100 MS: 1 EraseBytes- 00:26:04.637 [2024-06-10 11:36:48.579250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.579278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.579409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.579429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.579520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.579535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.637 [2024-06-10 11:36:48.579624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.637 [2024-06-10 11:36:48.579640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.897 [2024-06-10 11:36:48.639802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3674937295934324735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.897 [2024-06-10 11:36:48.639832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:26:04.897 [2024-06-10 11:36:48.639929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446742982787858431 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.897 [2024-06-10 11:36:48.639950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:26:04.897 [2024-06-10 11:36:48.640044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.897 [2024-06-10 11:36:48.640066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:26:04.897 [2024-06-10 11:36:48.640151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551419 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.897 [2024-06-10 11:36:48.640167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:26:04.897 #37 NEW cov: 12178 ft: 15626 corp: 30/2405b lim: 100 exec/s: 18 rss: 74Mb L: 99/100 MS: 2 InsertRepeatedBytes-PersAutoDict- DE: "\001\000\000\000"- 00:26:04.897 #37 DONE cov: 12178 ft: 15626 corp: 30/2405b lim: 100 exec/s: 18 rss: 74Mb 00:26:04.897 ###### Recommended dictionary. ###### 00:26:04.897 "\377\377\377\377\377\377\377\377" # Uses: 2 00:26:04.897 "\001\000\000\000" # Uses: 1 00:26:04.897 ###### End of recommended dictionary. ###### 00:26:04.897 Done 37 runs in 2 second(s) 00:26:04.897 11:36:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:26:04.897 11:36:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:04.897 11:36:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:04.897 11:36:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:26:04.897 00:26:04.897 real 1m8.692s 00:26:04.897 user 1m41.156s 00:26:04.897 sys 0m10.496s 00:26:04.897 11:36:48 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:04.897 11:36:48 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:04.897 ************************************ 00:26:04.897 END TEST nvmf_fuzz 00:26:04.897 ************************************ 00:26:04.897 11:36:48 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:26:04.897 11:36:48 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:26:04.897 11:36:48 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:26:04.897 11:36:48 llvm_fuzz -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:26:04.897 11:36:48 llvm_fuzz -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:05.159 11:36:48 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:05.159 ************************************ 00:26:05.159 START TEST vfio_fuzz 00:26:05.159 ************************************ 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:26:05.159 * Looking for test storage... 
00:26:05.159 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:05.159 11:36:48 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:26:05.159 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:26:05.160 11:36:49 
llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:05.160 #define SPDK_CONFIG_H 00:26:05.160 #define SPDK_CONFIG_APPS 1 00:26:05.160 #define SPDK_CONFIG_ARCH native 00:26:05.160 #undef SPDK_CONFIG_ASAN 00:26:05.160 #undef SPDK_CONFIG_AVAHI 00:26:05.160 #undef SPDK_CONFIG_CET 00:26:05.160 #define SPDK_CONFIG_COVERAGE 1 00:26:05.160 #define SPDK_CONFIG_CROSS_PREFIX 00:26:05.160 #undef SPDK_CONFIG_CRYPTO 00:26:05.160 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:05.160 #undef SPDK_CONFIG_CUSTOMOCF 00:26:05.160 #undef SPDK_CONFIG_DAOS 00:26:05.160 #define SPDK_CONFIG_DAOS_DIR 00:26:05.160 #define SPDK_CONFIG_DEBUG 1 00:26:05.160 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:05.160 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:26:05.160 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:05.160 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:05.160 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:05.160 #undef SPDK_CONFIG_DPDK_UADK 00:26:05.160 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:26:05.160 #define SPDK_CONFIG_EXAMPLES 1 00:26:05.160 #undef SPDK_CONFIG_FC 00:26:05.160 #define SPDK_CONFIG_FC_PATH 00:26:05.160 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:05.160 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:05.160 #undef SPDK_CONFIG_FUSE 00:26:05.160 #define SPDK_CONFIG_FUZZER 1 00:26:05.160 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:26:05.160 #undef SPDK_CONFIG_GOLANG 00:26:05.160 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:26:05.160 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:26:05.160 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:05.160 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:26:05.160 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:05.160 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:05.160 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:05.160 #define SPDK_CONFIG_IDXD 1 00:26:05.160 #define SPDK_CONFIG_IDXD_KERNEL 1 00:26:05.160 #undef SPDK_CONFIG_IPSEC_MB 00:26:05.160 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:05.160 #define SPDK_CONFIG_ISAL 1 00:26:05.160 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:05.160 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:05.160 #define SPDK_CONFIG_LIBDIR 00:26:05.160 #undef SPDK_CONFIG_LTO 00:26:05.160 #define SPDK_CONFIG_MAX_LCORES 00:26:05.160 #define SPDK_CONFIG_NVME_CUSE 1 00:26:05.160 #undef SPDK_CONFIG_OCF 00:26:05.160 #define SPDK_CONFIG_OCF_PATH 00:26:05.160 #define SPDK_CONFIG_OPENSSL_PATH 00:26:05.160 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:05.160 #define SPDK_CONFIG_PGO_DIR 00:26:05.160 #undef SPDK_CONFIG_PGO_USE 00:26:05.160 #define SPDK_CONFIG_PREFIX /usr/local 00:26:05.160 #undef SPDK_CONFIG_RAID5F 00:26:05.160 #undef 
SPDK_CONFIG_RBD 00:26:05.160 #define SPDK_CONFIG_RDMA 1 00:26:05.160 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:05.160 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:05.160 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:05.160 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:05.160 #undef SPDK_CONFIG_SHARED 00:26:05.160 #undef SPDK_CONFIG_SMA 00:26:05.160 #define SPDK_CONFIG_TESTS 1 00:26:05.160 #undef SPDK_CONFIG_TSAN 00:26:05.160 #define SPDK_CONFIG_UBLK 1 00:26:05.160 #define SPDK_CONFIG_UBSAN 1 00:26:05.160 #undef SPDK_CONFIG_UNIT_TESTS 00:26:05.160 #undef SPDK_CONFIG_URING 00:26:05.160 #define SPDK_CONFIG_URING_PATH 00:26:05.160 #undef SPDK_CONFIG_URING_ZNS 00:26:05.160 #undef SPDK_CONFIG_USDT 00:26:05.160 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:05.160 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:05.160 #define SPDK_CONFIG_VFIO_USER 1 00:26:05.160 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:05.160 #define SPDK_CONFIG_VHOST 1 00:26:05.160 #define SPDK_CONFIG_VIRTIO 1 00:26:05.160 #undef SPDK_CONFIG_VTUNE 00:26:05.160 #define SPDK_CONFIG_VTUNE_DIR 00:26:05.160 #define SPDK_CONFIG_WERROR 1 00:26:05.160 #define SPDK_CONFIG_WPDK_DIR 00:26:05.160 #undef SPDK_CONFIG_XNVME 00:26:05.160 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:26:05.160 11:36:49 
llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:26:05.160 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # : 1 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # : 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # : 0 00:26:05.161 
11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # : 1 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # : 1 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # : 1 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # : 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # : 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # : true 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # : 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # : 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@171 -- # : 0 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
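The long run of ': 0' / 'export SPDK_TEST_...' entries above is xtrace output from autotest_common.sh giving every SPDK_TEST_* switch a default before exporting it. A minimal bash sketch of that idiom is shown below; only the switch names and values that actually appear in this trace (for example SPDK_TEST_FUZZER=1, SPDK_TEST_FUZZER_SHORT=1 and SPDK_TEST_NVMF_TRANSPORT=rdma) are confirmed by the log, and the exact upstream source lines may differ.

    # ":" is a no-op, so ': "${VAR:=default}"' assigns the default only when VAR is
    # unset or empty; with xtrace enabled each assignment is logged as the bare
    # ': 0' (or ': 1', ': rdma') lines seen above, followed by the export.
    : "${SPDK_TEST_FUZZER:=0}"
    export SPDK_TEST_FUZZER
    : "${SPDK_TEST_FUZZER_SHORT:=0}"
    export SPDK_TEST_FUZZER_SHORT
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
    export SPDK_TEST_NVMF_TRANSPORT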
00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:05.161 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@200 -- # cat 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # [[ -z 3337946 ]] 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # kill -0 3337946 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:26:05.162 
11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:26:05.162 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.7k9cFP 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.7k9cFP/tests/vfio /tmp/spdk.7k9cFP 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # df -T 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=957775872 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4326653952 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=82550304768 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508429312 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=11958124544 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47249502208 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254212608 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895695872 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901688320 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5992448 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253446656 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254216704 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=770048 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450835968 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450840064 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:26:05.423 * Looking for test storage... 
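The df -T output parsed above feeds set_test_storage, which fills the mounts/fss/sizes/avails/uses arrays per mount point and then looks for a test directory whose filesystem has at least requested_size (2214592512 bytes in this run) free. A condensed bash sketch of that selection step follows; it mirrors the checks visible in the trace, but the surrounding helper code is abbreviated and the candidate list is assumed to be the storage_candidates array populated a few entries earlier.

    requested_size=2214592512    # value taken from the log above
    for target_dir in "${storage_candidates[@]}"; do
        # Find the mount point backing this candidate directory.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        target_space=${avails[$mount]}
        # Keep the first candidate with enough free space, matching the
        # '(( target_space >= requested_size ))' check in the trace.
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done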
00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # target_space=82550304768 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # new_size=14172717056 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:26:05.423 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@389 -- # return 0 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1681 -- # set -o errtrace 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1686 -- # true 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1688 -- # xtrace_fd 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:05.423 11:36:49 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:26:05.424 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:26:05.424 11:36:49 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:26:05.424 [2024-06-10 11:36:49.188594] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:26:05.424 [2024-06-10 11:36:49.188665] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337994 ] 00:26:05.424 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.424 [2024-06-10 11:36:49.270011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.424 [2024-06-10 11:36:49.358531] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.683 INFO: Running with entropic power schedule (0xFF, 100). 00:26:05.683 INFO: Seed: 2389369985 00:26:05.683 INFO: Loaded 1 modules (354952 inline 8-bit counters): 354952 [0x296804c, 0x29bead4), 00:26:05.683 INFO: Loaded 1 PC tables (354952 PCs): 354952 [0x29bead8,0x2f29358), 00:26:05.683 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:26:05.683 INFO: A corpus is not provided, starting from an empty corpus 00:26:05.683 #2 INITED exec/s: 0 rss: 66Mb 00:26:05.683 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:26:05.683 This may also happen if the target rejected all inputs we tried so far 00:26:05.683 [2024-06-10 11:36:49.609205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:26:06.200 NEW_FUNC[1/646]: 0x4828a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:26:06.200 NEW_FUNC[2/646]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:26:06.200 #12 NEW cov: 10915 ft: 10861 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 5 ChangeByte-InsertRepeatedBytes-ChangeByte-CopyPart-CrossOver- 00:26:06.459 #13 NEW cov: 10936 ft: 13961 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CMP- DE: "\000\000\000\000"- 00:26:06.717 NEW_FUNC[1/1]: 0x1a43680 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:06.717 #24 NEW cov: 10953 ft: 14436 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:26:06.717 #31 NEW cov: 10953 ft: 15347 corp: 5/25b lim: 6 exec/s: 31 rss: 75Mb L: 6/6 MS: 2 CrossOver-CrossOver- 00:26:06.976 #32 NEW cov: 10953 ft: 16598 corp: 6/31b lim: 6 exec/s: 32 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:26:07.234 #33 NEW cov: 10953 ft: 17289 corp: 7/37b lim: 6 exec/s: 33 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:26:07.234 #39 NEW cov: 10953 ft: 17695 corp: 8/43b lim: 6 exec/s: 39 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:26:07.493 #40 NEW cov: 10953 ft: 17757 corp: 9/49b lim: 6 exec/s: 40 rss: 75Mb L: 6/6 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:26:07.751 #41 NEW cov: 10960 ft: 17873 corp: 10/55b lim: 6 exec/s: 41 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:26:07.751 #47 NEW cov: 10960 ft: 17925 corp: 
11/61b lim: 6 exec/s: 23 rss: 75Mb L: 6/6 MS: 1 CrossOver- 00:26:07.751 #47 DONE cov: 10960 ft: 17925 corp: 11/61b lim: 6 exec/s: 23 rss: 75Mb 00:26:07.751 ###### Recommended dictionary. ###### 00:26:07.751 "\000\000\000\000" # Uses: 1 00:26:07.751 ###### End of recommended dictionary. ###### 00:26:07.751 Done 47 runs in 2 second(s) 00:26:07.751 [2024-06-10 11:36:51.656475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:26:08.010 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:26:08.010 11:36:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:26:08.279 [2024-06-10 11:36:51.984596] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 
00:26:08.279 [2024-06-10 11:36:51.984671] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338381 ] 00:26:08.279 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.279 [2024-06-10 11:36:52.068994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.279 [2024-06-10 11:36:52.157512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.540 INFO: Running with entropic power schedule (0xFF, 100). 00:26:08.540 INFO: Seed: 899398872 00:26:08.540 INFO: Loaded 1 modules (354952 inline 8-bit counters): 354952 [0x296804c, 0x29bead4), 00:26:08.540 INFO: Loaded 1 PC tables (354952 PCs): 354952 [0x29bead8,0x2f29358), 00:26:08.540 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:26:08.540 INFO: A corpus is not provided, starting from an empty corpus 00:26:08.540 #2 INITED exec/s: 0 rss: 66Mb 00:26:08.540 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:26:08.540 This may also happen if the target rejected all inputs we tried so far 00:26:08.540 [2024-06-10 11:36:52.413211] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:26:08.540 [2024-06-10 11:36:52.466279] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:08.540 [2024-06-10 11:36:52.466307] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:08.540 [2024-06-10 11:36:52.466324] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:09.056 NEW_FUNC[1/647]: 0x482e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:26:09.056 NEW_FUNC[2/647]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:26:09.056 #31 NEW cov: 10910 ft: 10750 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 4 CopyPart-ChangeByte-InsertByte-CrossOver- 00:26:09.056 [2024-06-10 11:36:52.958259] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:09.056 [2024-06-10 11:36:52.958299] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:09.056 [2024-06-10 11:36:52.958318] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:09.314 NEW_FUNC[1/1]: 0x1707d50 in nvme_pcie_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:29 00:26:09.314 #52 NEW cov: 10929 ft: 14485 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ShuffleBytes- 00:26:09.314 [2024-06-10 11:36:53.163557] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:09.314 [2024-06-10 11:36:53.163583] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:09.314 [2024-06-10 11:36:53.163599] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:09.572 NEW_FUNC[1/1]: 0x1a43680 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:09.572 #55 NEW cov: 10949 ft: 15075 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 3 EraseBytes-CrossOver-InsertByte- 00:26:09.572 [2024-06-10 11:36:53.352547] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:09.572 [2024-06-10 11:36:53.352572] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:09.572 [2024-06-10 11:36:53.352588] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:09.572 #56 NEW cov: 10949 ft: 16374 corp: 5/17b lim: 4 exec/s: 56 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:26:09.830 [2024-06-10 11:36:53.553439] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:09.830 [2024-06-10 11:36:53.553464] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:09.830 [2024-06-10 11:36:53.553481] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:09.830 #57 NEW cov: 10949 ft: 16577 corp: 6/21b lim: 4 exec/s: 57 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:26:09.830 [2024-06-10 11:36:53.744599] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:09.830 [2024-06-10 11:36:53.744623] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:09.830 [2024-06-10 11:36:53.744640] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:10.088 #64 NEW cov: 10949 ft: 16749 corp: 7/25b lim: 4 exec/s: 64 rss: 74Mb L: 4/4 MS: 2 EraseBytes-CrossOver- 00:26:10.088 [2024-06-10 11:36:53.935679] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:10.088 [2024-06-10 11:36:53.935703] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:10.088 [2024-06-10 11:36:53.935720] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:10.346 #65 NEW cov: 10949 ft: 17056 corp: 8/29b lim: 4 exec/s: 65 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:26:10.346 [2024-06-10 11:36:54.126845] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:10.346 [2024-06-10 11:36:54.126869] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:10.346 [2024-06-10 11:36:54.126885] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:10.346 #66 NEW cov: 10956 ft: 17441 corp: 9/33b lim: 4 exec/s: 66 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:26:10.604 [2024-06-10 11:36:54.330884] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:26:10.604 [2024-06-10 11:36:54.330906] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:26:10.604 [2024-06-10 11:36:54.330923] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:26:10.604 #75 NEW cov: 10956 ft: 17693 corp: 10/37b lim: 4 exec/s: 37 rss: 74Mb L: 4/4 MS: 4 CopyPart-ShuffleBytes-InsertByte-InsertByte- 00:26:10.604 #75 DONE cov: 10956 ft: 17693 corp: 10/37b lim: 4 exec/s: 37 rss: 74Mb 00:26:10.604 Done 75 runs in 2 second(s) 00:26:10.604 [2024-06-10 11:36:54.470470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 
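The 'start_llvm_fuzz 2 1 0x1' call above is the third pass of the start_llvm_fuzz_short loop from test/fuzz/llvm/common.sh: one invocation per fuzzer target, each given a one second budget on core mask 0x1. A rough bash outline of that loop, reconstructed from the '(( i = 0 ))' / '(( i < fuzz_num ))' entries earlier in this run; the function body is abbreviated, not the full upstream source.

    # fuzz_num is 7 in this run, counted in vfio/run.sh with:
    #   grep -c '\.fn =' test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
    start_llvm_fuzz_short() {
        local fuzz_num=$1    # number of fuzzer targets
        local time=$2        # seconds per target
        local i
        for ((i = 0; i < fuzz_num; i++)); do
            start_llvm_fuzz "$i" "$time" 0x1    # fuzzer_type, run time, core mask
        done
    }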
00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:26:10.862 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:26:10.862 11:36:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:26:10.862 [2024-06-10 11:36:54.797147] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:26:10.862 [2024-06-10 11:36:54.797221] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338787 ] 00:26:11.120 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.120 [2024-06-10 11:36:54.881208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.120 [2024-06-10 11:36:54.969338] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.377 INFO: Running with entropic power schedule (0xFF, 100). 00:26:11.377 INFO: Seed: 3693395225 00:26:11.377 INFO: Loaded 1 modules (354952 inline 8-bit counters): 354952 [0x296804c, 0x29bead4), 00:26:11.377 INFO: Loaded 1 PC tables (354952 PCs): 354952 [0x29bead8,0x2f29358), 00:26:11.377 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:26:11.377 INFO: A corpus is not provided, starting from an empty corpus 00:26:11.377 #2 INITED exec/s: 0 rss: 66Mb 00:26:11.377 WARNING: no interesting inputs were found so far. 
Is the code instrumented for coverage? 00:26:11.377 This may also happen if the target rejected all inputs we tried so far 00:26:11.377 [2024-06-10 11:36:55.210903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:26:11.378 [2024-06-10 11:36:55.289369] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:11.894 NEW_FUNC[1/647]: 0x483820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:26:11.894 NEW_FUNC[2/647]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:26:11.894 #54 NEW cov: 10897 ft: 10502 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 2 CopyPart-InsertRepeatedBytes- 00:26:11.894 [2024-06-10 11:36:55.790290] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:12.151 #55 NEW cov: 10915 ft: 14273 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:26:12.151 [2024-06-10 11:36:55.989142] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:12.151 NEW_FUNC[1/1]: 0x1a43680 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:12.151 #66 NEW cov: 10932 ft: 15917 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:26:12.409 [2024-06-10 11:36:56.181063] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:12.409 #67 NEW cov: 10932 ft: 16650 corp: 5/33b lim: 8 exec/s: 67 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:26:12.667 [2024-06-10 11:36:56.362833] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:12.667 #68 NEW cov: 10932 ft: 17226 corp: 6/41b lim: 8 exec/s: 68 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:26:12.667 [2024-06-10 11:36:56.541430] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:12.925 #69 NEW cov: 10932 ft: 17403 corp: 7/49b lim: 8 exec/s: 69 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:26:12.925 [2024-06-10 11:36:56.726018] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:12.925 #79 NEW cov: 10932 ft: 17592 corp: 8/57b lim: 8 exec/s: 79 rss: 74Mb L: 8/8 MS: 5 ChangeBit-InsertByte-InsertRepeatedBytes-ChangeBit-CopyPart- 00:26:13.183 [2024-06-10 11:36:56.916635] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:13.183 #80 NEW cov: 10939 ft: 17943 corp: 9/65b lim: 8 exec/s: 80 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:26:13.183 [2024-06-10 11:36:57.096503] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:26:13.441 #85 NEW cov: 10939 ft: 18313 corp: 10/73b lim: 8 exec/s: 42 rss: 74Mb L: 8/8 MS: 5 EraseBytes-CrossOver-ShuffleBytes-ChangeBit-InsertByte- 00:26:13.441 #85 DONE cov: 10939 ft: 18313 corp: 10/73b lim: 8 exec/s: 42 rss: 74Mb 00:26:13.441 Done 85 runs in 2 second(s) 00:26:13.441 [2024-06-10 11:36:57.224453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:26:13.699 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:26:13.699 11:36:57 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:13.699 11:36:57 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:13.699 11:36:57 llvm_fuzz.vfio_fuzz 
-- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:26:13.700 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:26:13.700 11:36:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:26:13.700 [2024-06-10 11:36:57.551972] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:26:13.700 [2024-06-10 11:36:57.552046] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339163 ] 00:26:13.700 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.700 [2024-06-10 11:36:57.635028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.958 [2024-06-10 11:36:57.720762] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.216 INFO: Running with entropic power schedule (0xFF, 100). 
00:26:14.216 INFO: Seed: 2159437274 00:26:14.216 INFO: Loaded 1 modules (354952 inline 8-bit counters): 354952 [0x296804c, 0x29bead4), 00:26:14.216 INFO: Loaded 1 PC tables (354952 PCs): 354952 [0x29bead8,0x2f29358), 00:26:14.216 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:26:14.216 INFO: A corpus is not provided, starting from an empty corpus 00:26:14.216 #2 INITED exec/s: 0 rss: 66Mb 00:26:14.216 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:26:14.216 This may also happen if the target rejected all inputs we tried so far 00:26:14.216 [2024-06-10 11:36:57.969941] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:26:14.474 NEW_FUNC[1/647]: 0x483f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:26:14.474 NEW_FUNC[2/647]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:26:14.474 #47 NEW cov: 10905 ft: 10693 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 InsertRepeatedBytes-InsertRepeatedBytes-ChangeBinInt-CopyPart-CopyPart- 00:26:14.732 #76 NEW cov: 10923 ft: 13301 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 4 CrossOver-ChangeBinInt-ChangeBinInt-CopyPart- 00:26:14.732 #92 NEW cov: 10923 ft: 14489 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:26:14.999 NEW_FUNC[1/1]: 0x1a43680 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:14.999 #93 NEW cov: 10940 ft: 15167 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:26:14.999 #99 NEW cov: 10940 ft: 15598 corp: 6/161b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:26:15.259 #100 NEW cov: 10940 ft: 15757 corp: 7/193b lim: 32 exec/s: 100 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:26:15.259 #101 NEW cov: 10940 ft: 16120 corp: 8/225b lim: 32 exec/s: 101 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:26:15.562 #102 NEW cov: 10940 ft: 16620 corp: 9/257b lim: 32 exec/s: 102 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\001\000\000\000\004\333U\376"- 00:26:15.562 [2024-06-10 11:36:59.271691] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 18446744073709489421 > max 8796093022208 00:26:15.562 [2024-06-10 11:36:59.271733] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xd0afffffffffffe, 0xd0affffffff0d0b) offset=0xd09ffffffffffff flags=0x3: No space left on device 00:26:15.562 [2024-06-10 11:36:59.271746] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:26:15.562 [2024-06-10 11:36:59.271764] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:26:15.562 NEW_FUNC[1/1]: 0x13d87c0 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3094 00:26:15.562 #113 NEW cov: 10951 ft: 16780 corp: 10/289b lim: 32 exec/s: 113 rss: 74Mb L: 32/32 MS: 1 PersAutoDict- DE: "\001\000\000\000\004\333U\376"- 00:26:15.562 #114 NEW cov: 10951 ft: 16920 corp: 11/321b lim: 32 exec/s: 114 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:26:15.820 #115 NEW cov: 10951 ft: 16977 corp: 12/353b lim: 32 exec/s: 115 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:26:15.820 #116 NEW cov: 10951 ft: 17051 corp: 13/385b lim: 32 exec/s: 116 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:26:16.078 #117 NEW cov: 10958 ft: 17096 
corp: 14/417b lim: 32 exec/s: 117 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:26:16.078 [2024-06-10 11:36:59.864667] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 18446744073709489421 > max 8796093022208 00:26:16.078 [2024-06-10 11:36:59.864701] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xd0afffffffffffe, 0xd0affffffff0d0b) offset=0xd09ffffffffffff flags=0x3: No space left on device 00:26:16.078 [2024-06-10 11:36:59.864715] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:26:16.078 [2024-06-10 11:36:59.864733] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:26:16.078 #118 NEW cov: 10958 ft: 17101 corp: 15/449b lim: 32 exec/s: 59 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:26:16.078 #118 DONE cov: 10958 ft: 17101 corp: 15/449b lim: 32 exec/s: 59 rss: 75Mb 00:26:16.078 ###### Recommended dictionary. ###### 00:26:16.078 "\001\000\000\000\004\333U\376" # Uses: 5 00:26:16.078 ###### End of recommended dictionary. ###### 00:26:16.078 Done 118 runs in 2 second(s) 00:26:16.078 [2024-06-10 11:36:59.961462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:26:16.337 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:26:16.337 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:26:16.338 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:26:16.338 11:37:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:26:16.338 [2024-06-10 11:37:00.280777] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:26:16.338 [2024-06-10 11:37:00.280865] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339544 ] 00:26:16.596 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.596 [2024-06-10 11:37:00.364939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.596 [2024-06-10 11:37:00.454533] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.854 INFO: Running with entropic power schedule (0xFF, 100). 00:26:16.855 INFO: Seed: 603465465 00:26:16.855 INFO: Loaded 1 modules (354952 inline 8-bit counters): 354952 [0x296804c, 0x29bead4), 00:26:16.855 INFO: Loaded 1 PC tables (354952 PCs): 354952 [0x29bead8,0x2f29358), 00:26:16.855 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:26:16.855 INFO: A corpus is not provided, starting from an empty corpus 00:26:16.855 #2 INITED exec/s: 0 rss: 66Mb 00:26:16.855 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:26:16.855 This may also happen if the target rejected all inputs we tried so far 00:26:16.855 [2024-06-10 11:37:00.708349] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:26:16.855 [2024-06-10 11:37:00.780681] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=323 offset=0 prot=0x3: Invalid argument 00:26:16.855 [2024-06-10 11:37:00.780709] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:26:16.855 [2024-06-10 11:37:00.780720] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:26:16.855 [2024-06-10 11:37:00.780739] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:26:16.855 [2024-06-10 11:37:00.781703] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:26:16.855 [2024-06-10 11:37:00.781724] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:26:16.855 [2024-06-10 11:37:00.781741] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:26:17.371 NEW_FUNC[1/648]: 0x484780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:26:17.371 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:26:17.371 #110 NEW cov: 10920 ft: 10537 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:26:17.371 [2024-06-10 11:37:01.269357] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-4/domain/1: DMA region size 1152921504606846976 > max 8796093022208 00:26:17.371 [2024-06-10 11:37:01.269389] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x1000000000000000) offset=0 flags=0x3: No space left on device 00:26:17.371 [2024-06-10 11:37:01.269402] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:26:17.371 [2024-06-10 11:37:01.269419] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:26:17.371 [2024-06-10 11:37:01.270376] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x1000000000000000) flags=0: No such file or directory 00:26:17.371 [2024-06-10 11:37:01.270396] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:26:17.371 [2024-06-10 11:37:01.270413] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:26:17.628 #116 NEW cov: 10934 ft: 13844 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:26:17.628 NEW_FUNC[1/1]: 0x1a43680 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:17.628 #117 NEW cov: 10951 ft: 15315 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:26:17.886 #118 NEW cov: 10955 ft: 15840 corp: 5/129b lim: 32 exec/s: 118 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:26:18.145 [2024-06-10 11:37:01.840024] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 11574427654092267680 > max 8796093022208 00:26:18.145 [2024-06-10 11:37:01.840052] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa0a0000000000000, 0x4140a0a0a0a0a0a0) offset=0xa0a0a0 flags=0x3: No space left on device 00:26:18.145 [2024-06-10 11:37:01.840064] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:26:18.145 [2024-06-10 11:37:01.840081] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:26:18.145 [2024-06-10 11:37:01.841042] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa0a0000000000000, 0x4140a0a0a0a0a0a0) flags=0: No such file or directory 00:26:18.145 [2024-06-10 11:37:01.841061] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:26:18.145 [2024-06-10 11:37:01.841078] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:26:18.145 #120 NEW cov: 10955 ft: 16739 corp: 6/161b lim: 32 exec/s: 120 rss: 74Mb L: 32/32 MS: 2 EraseBytes-InsertRepeatedBytes- 00:26:18.145 [2024-06-10 11:37:02.020325] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 2161727821137838080 > max 8796093022208 00:26:18.145 [2024-06-10 11:37:02.020349] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x1e00000000000000) offset=0x2a000000000000 flags=0x3: No space left on device 00:26:18.145 [2024-06-10 11:37:02.020361] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:26:18.145 [2024-06-10 11:37:02.020377] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:26:18.145 [2024-06-10 11:37:02.021341] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 
0x1e00000000000000) flags=0: No such file or directory 00:26:18.145 [2024-06-10 11:37:02.021360] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:26:18.145 [2024-06-10 11:37:02.021377] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:26:18.402 #135 NEW cov: 10955 ft: 16837 corp: 7/193b lim: 32 exec/s: 135 rss: 74Mb L: 32/32 MS: 5 EraseBytes-ChangeBinInt-ChangeBit-InsertByte-CopyPart- 00:26:18.402 #138 NEW cov: 10955 ft: 17182 corp: 8/225b lim: 32 exec/s: 138 rss: 74Mb L: 32/32 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:26:18.660 [2024-06-10 11:37:02.392827] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 2161727824321166781 > max 8796093022208 00:26:18.660 [2024-06-10 11:37:02.392852] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xbd00000000000000, 0xdb000000bdbdbdbd) offset=0x2a000000000000 flags=0x3: No space left on device 00:26:18.660 [2024-06-10 11:37:02.392864] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:26:18.660 [2024-06-10 11:37:02.392880] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:26:18.661 [2024-06-10 11:37:02.393833] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xbd00000000000000, 0xdb000000bdbdbdbd) flags=0: No such file or directory 00:26:18.661 [2024-06-10 11:37:02.393853] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:26:18.661 [2024-06-10 11:37:02.393870] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:26:18.661 #139 NEW cov: 10962 ft: 17421 corp: 9/257b lim: 32 exec/s: 139 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:26:18.661 [2024-06-10 11:37:02.576805] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 1152921504606846976 > max 8796093022208 00:26:18.661 [2024-06-10 11:37:02.576829] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x1000000000000000) offset=0 flags=0x3: No space left on device 00:26:18.661 [2024-06-10 11:37:02.576840] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:26:18.661 [2024-06-10 11:37:02.576860] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:26:18.661 [2024-06-10 11:37:02.577830] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x1000000000000000) flags=0: No such file or directory 00:26:18.661 [2024-06-10 11:37:02.577850] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:26:18.661 [2024-06-10 11:37:02.577867] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:26:18.919 #140 NEW cov: 10962 ft: 17502 corp: 10/289b lim: 32 exec/s: 70 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:26:18.919 #140 DONE cov: 10962 ft: 17502 corp: 10/289b lim: 32 exec/s: 70 rss: 75Mb 00:26:18.919 Done 140 runs in 2 second(s) 00:26:18.919 [2024-06-10 11:37:02.705475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:19.177 
11:37:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:26:19.177 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:26:19.177 11:37:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:26:19.177 [2024-06-10 11:37:03.013151] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:26:19.177 [2024-06-10 11:37:03.013255] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339919 ] 00:26:19.177 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.177 [2024-06-10 11:37:03.095436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.435 [2024-06-10 11:37:03.184740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.435 INFO: Running with entropic power schedule (0xFF, 100). 
00:26:19.435 INFO: Seed: 3334456174 00:26:19.693 INFO: Loaded 1 modules (354952 inline 8-bit counters): 354952 [0x296804c, 0x29bead4), 00:26:19.693 INFO: Loaded 1 PC tables (354952 PCs): 354952 [0x29bead8,0x2f29358), 00:26:19.693 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:26:19.693 INFO: A corpus is not provided, starting from an empty corpus 00:26:19.693 #2 INITED exec/s: 0 rss: 66Mb 00:26:19.693 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:26:19.693 This may also happen if the target rejected all inputs we tried so far 00:26:19.693 [2024-06-10 11:37:03.437905] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:26:19.694 [2024-06-10 11:37:03.491321] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:19.694 [2024-06-10 11:37:03.491357] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:19.952 NEW_FUNC[1/648]: 0x485180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:26:19.953 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:26:19.953 #20 NEW cov: 10917 ft: 10749 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 3 ChangeBit-InsertRepeatedBytes-InsertRepeatedBytes- 00:26:20.211 [2024-06-10 11:37:03.972566] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:20.211 [2024-06-10 11:37:03.972616] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:20.211 #26 NEW cov: 10931 ft: 13524 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:26:20.470 [2024-06-10 11:37:04.163856] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:20.470 [2024-06-10 11:37:04.163890] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:20.470 NEW_FUNC[1/1]: 0x1a43680 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:20.470 #27 NEW cov: 10948 ft: 15261 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:26:20.470 [2024-06-10 11:37:04.366470] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:20.470 [2024-06-10 11:37:04.366502] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:20.730 #33 NEW cov: 10948 ft: 15393 corp: 5/53b lim: 13 exec/s: 33 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:26:20.730 [2024-06-10 11:37:04.556835] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:20.730 [2024-06-10 11:37:04.556866] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:20.730 #39 NEW cov: 10948 ft: 16242 corp: 6/66b lim: 13 exec/s: 39 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:26:20.989 [2024-06-10 11:37:04.743915] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:20.989 [2024-06-10 11:37:04.743948] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:20.989 #45 NEW cov: 10948 ft: 16484 corp: 7/79b lim: 13 exec/s: 45 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:26:20.989 [2024-06-10 11:37:04.934153] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: 
msg0: cmd 8 failed: Invalid argument 00:26:20.989 [2024-06-10 11:37:04.934185] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:21.248 #46 NEW cov: 10948 ft: 16641 corp: 8/92b lim: 13 exec/s: 46 rss: 74Mb L: 13/13 MS: 1 CMP- DE: "\377\017"- 00:26:21.248 [2024-06-10 11:37:05.123597] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:21.248 [2024-06-10 11:37:05.123627] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:21.506 #48 NEW cov: 10955 ft: 17221 corp: 9/105b lim: 13 exec/s: 48 rss: 74Mb L: 13/13 MS: 2 EraseBytes-InsertByte- 00:26:21.506 [2024-06-10 11:37:05.311725] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:21.506 [2024-06-10 11:37:05.311759] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:21.506 #49 NEW cov: 10955 ft: 17329 corp: 10/118b lim: 13 exec/s: 24 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:26:21.506 #49 DONE cov: 10955 ft: 17329 corp: 10/118b lim: 13 exec/s: 24 rss: 74Mb 00:26:21.506 ###### Recommended dictionary. ###### 00:26:21.506 "\377\017" # Uses: 0 00:26:21.506 ###### End of recommended dictionary. ###### 00:26:21.506 Done 49 runs in 2 second(s) 00:26:21.506 [2024-06-10 11:37:05.446444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:26:21.766 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:26:21.766 11:37:05 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:21.766 11:37:05 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:21.766 11:37:05 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:26:21.766 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:26:21.766 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:26:21.766 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:26:21.766 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:26:22.025 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- 
vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:26:22.025 11:37:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:26:22.025 [2024-06-10 11:37:05.757467] Starting SPDK v24.09-pre git sha1 a3f6419f1 / DPDK 24.03.0 initialization... 00:26:22.025 [2024-06-10 11:37:05.757562] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340308 ] 00:26:22.025 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.025 [2024-06-10 11:37:05.838158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.025 [2024-06-10 11:37:05.925513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.283 INFO: Running with entropic power schedule (0xFF, 100). 00:26:22.283 INFO: Seed: 1780484312 00:26:22.283 INFO: Loaded 1 modules (354952 inline 8-bit counters): 354952 [0x296804c, 0x29bead4), 00:26:22.283 INFO: Loaded 1 PC tables (354952 PCs): 354952 [0x29bead8,0x2f29358), 00:26:22.283 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:26:22.283 INFO: A corpus is not provided, starting from an empty corpus 00:26:22.283 #2 INITED exec/s: 0 rss: 66Mb 00:26:22.283 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:26:22.283 This may also happen if the target rejected all inputs we tried so far 00:26:22.283 [2024-06-10 11:37:06.178879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:26:22.283 [2024-06-10 11:37:06.230314] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:22.283 [2024-06-10 11:37:06.230349] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:22.800 NEW_FUNC[1/648]: 0x485e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:26:22.800 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:26:22.800 #6 NEW cov: 10909 ft: 10764 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 4 CrossOver-ChangeByte-EraseBytes-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:26:22.800 [2024-06-10 11:37:06.706055] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:22.800 [2024-06-10 11:37:06.706101] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:23.059 #8 NEW cov: 10924 ft: 14564 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 2 CrossOver-CopyPart- 00:26:23.059 [2024-06-10 11:37:06.881294] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:23.059 [2024-06-10 11:37:06.881330] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:23.059 NEW_FUNC[1/1]: 0x1a43680 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:26:23.059 #9 NEW cov: 10943 ft: 15218 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CopyPart- 00:26:23.326 [2024-06-10 11:37:07.057547] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:23.326 [2024-06-10 11:37:07.057582] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:23.326 #18 NEW cov: 10943 ft: 15697 corp: 5/37b lim: 9 exec/s: 18 rss: 74Mb L: 9/9 MS: 4 ShuffleBytes-CopyPart-InsertRepeatedBytes-InsertByte- 00:26:23.326 [2024-06-10 11:37:07.232357] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:23.326 [2024-06-10 11:37:07.232389] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:23.596 #19 NEW cov: 10943 ft: 16622 corp: 6/46b lim: 9 exec/s: 19 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:26:23.596 [2024-06-10 11:37:07.404027] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:23.596 [2024-06-10 11:37:07.404059] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:23.596 #20 NEW cov: 10943 ft: 16893 corp: 7/55b lim: 9 exec/s: 20 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:26:23.854 [2024-06-10 11:37:07.572640] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:23.854 [2024-06-10 11:37:07.572674] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:23.854 #21 NEW cov: 10943 ft: 17337 corp: 8/64b lim: 9 exec/s: 21 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:26:23.854 [2024-06-10 11:37:07.746943] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:23.854 [2024-06-10 11:37:07.746976] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 
8 return failure 00:26:24.112 #22 NEW cov: 10943 ft: 17384 corp: 9/73b lim: 9 exec/s: 22 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:26:24.112 [2024-06-10 11:37:07.920939] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:24.112 [2024-06-10 11:37:07.920970] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:24.113 #23 NEW cov: 10950 ft: 17429 corp: 10/82b lim: 9 exec/s: 23 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:26:24.371 [2024-06-10 11:37:08.090276] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:26:24.371 [2024-06-10 11:37:08.090309] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:26:24.371 #24 NEW cov: 10950 ft: 17713 corp: 11/91b lim: 9 exec/s: 12 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:26:24.371 #24 DONE cov: 10950 ft: 17713 corp: 11/91b lim: 9 exec/s: 12 rss: 75Mb 00:26:24.371 ###### Recommended dictionary. ###### 00:26:24.371 "\001\000\000\000\000\000\000\000" # Uses: 0 00:26:24.371 ###### End of recommended dictionary. ###### 00:26:24.371 Done 24 runs in 2 second(s) 00:26:24.371 [2024-06-10 11:37:08.213453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:26:24.629 11:37:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:26:24.629 11:37:08 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:26:24.629 11:37:08 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:26:24.629 11:37:08 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:24.629 00:26:24.629 real 0m19.614s 00:26:24.629 user 0m26.733s 00:26:24.629 sys 0m2.053s 00:26:24.629 11:37:08 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:24.629 11:37:08 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.629 ************************************ 00:26:24.629 END TEST vfio_fuzz 00:26:24.629 ************************************ 00:26:24.629 11:37:08 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:26:24.629 00:26:24.629 real 1m28.592s 00:26:24.629 user 2m8.006s 00:26:24.629 sys 0m12.742s 00:26:24.629 11:37:08 llvm_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:24.629 11:37:08 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.629 ************************************ 00:26:24.629 END TEST llvm_fuzz 00:26:24.629 ************************************ 00:26:24.885 11:37:08 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:26:24.885 11:37:08 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:26:24.885 11:37:08 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:26:24.885 11:37:08 -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:24.885 11:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:24.885 11:37:08 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:26:24.885 11:37:08 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:26:24.885 11:37:08 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:26:24.885 11:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:29.066 INFO: APP EXITING 00:26:29.066 INFO: killing all VMs 00:26:29.066 INFO: killing vhost app 00:26:29.066 WARN: no vhost pid file found 00:26:29.066 INFO: EXIT DONE 00:26:31.594 Waiting for block devices as requested 00:26:31.594 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:31.594 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:31.594 0000:00:04.6 (8086 2021): 
vfio-pci -> ioatdma 00:26:31.594 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:31.851 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:31.851 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:31.851 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:31.851 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:32.109 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:32.109 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:32.109 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:32.369 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:32.369 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:32.369 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:32.627 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:32.627 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:32.627 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:35.907 Cleaning 00:26:35.907 Removing: /dev/shm/spdk_tgt_trace.pid3311779 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3308618 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3310166 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3311779 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3312326 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3313064 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3313284 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3314072 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3314092 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3314410 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3314662 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3315029 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3315284 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3315528 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3315725 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3315919 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3316144 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3316894 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3319238 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3319555 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3319823 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3319853 00:26:35.907 Removing: /var/run/dpdk/spdk_pid3320386 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3320404 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3320954 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3320973 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3321342 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3321361 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3321563 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3321741 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3322154 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3322334 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3322519 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3322655 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3322867 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3323044 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3323118 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3323316 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3323515 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3323712 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3323916 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3324148 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3324394 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3324643 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3324851 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3325050 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3325244 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3325443 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3325638 00:26:36.166 Removing: /var/run/dpdk/spdk_pid3325842 00:26:36.166 Removing: 
/var/run/dpdk/spdk_pid3326033 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3326226 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3326426 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3326630 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3326878 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3327118 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3327378 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3327447 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3327714 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3328251 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3328610 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3328974 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3329328 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3329699 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3330067 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3330437 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3330835 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3331348 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3331940 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3332496 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3332871 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3333225 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3333591 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3333951 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3334307 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3334669 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3335031 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3335390 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3335751 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3336111 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3336473 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3336828 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3337188 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3337555 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3337994 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3338381 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3338787 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3339163 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3339544 00:26:36.167 Removing: /var/run/dpdk/spdk_pid3339919 00:26:36.425 Removing: /var/run/dpdk/spdk_pid3340308 00:26:36.425 Clean 00:26:36.426 11:37:20 -- common/autotest_common.sh@1450 -- # return 0 00:26:36.426 11:37:20 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:26:36.426 11:37:20 -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:36.426 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:26:36.426 11:37:20 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:26:36.426 11:37:20 -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:36.426 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:26:36.426 11:37:20 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:26:36.426 11:37:20 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:26:36.426 11:37:20 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:26:36.426 11:37:20 -- spdk/autotest.sh@391 -- # hash lcov 00:26:36.426 11:37:20 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:26:36.426 11:37:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:26:36.426 11:37:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:36.426 11:37:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.426 11:37:20 -- 
scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.426 11:37:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.426 11:37:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.426 11:37:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.426 11:37:20 -- paths/export.sh@5 -- $ export PATH 00:26:36.426 11:37:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.426 11:37:20 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:26:36.426 11:37:20 -- common/autobuild_common.sh@437 -- $ date +%s 00:26:36.426 11:37:20 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718012240.XXXXXX 00:26:36.426 11:37:20 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718012240.ftYufh 00:26:36.426 11:37:20 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:26:36.426 11:37:20 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:26:36.426 11:37:20 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:26:36.426 11:37:20 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:26:36.684 11:37:20 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:26:36.685 11:37:20 -- common/autobuild_common.sh@453 -- $ get_config_params 00:26:36.685 11:37:20 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:26:36.685 11:37:20 -- common/autotest_common.sh@10 -- $ set +x 00:26:36.685 11:37:20 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:26:36.685 11:37:20 -- common/autobuild_common.sh@455 -- $ 
start_monitor_resources 00:26:36.685 11:37:20 -- pm/common@17 -- $ local monitor 00:26:36.685 11:37:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:36.685 11:37:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:36.685 11:37:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:36.685 11:37:20 -- pm/common@21 -- $ date +%s 00:26:36.685 11:37:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:36.685 11:37:20 -- pm/common@21 -- $ date +%s 00:26:36.685 11:37:20 -- pm/common@25 -- $ sleep 1 00:26:36.685 11:37:20 -- pm/common@21 -- $ date +%s 00:26:36.685 11:37:20 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012240 00:26:36.685 11:37:20 -- pm/common@21 -- $ date +%s 00:26:36.685 11:37:20 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012240 00:26:36.685 11:37:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012240 00:26:36.685 11:37:20 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718012240 00:26:36.685 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012240_collect-vmstat.pm.log 00:26:36.685 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012240_collect-cpu-load.pm.log 00:26:36.685 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012240_collect-cpu-temp.pm.log 00:26:36.685 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718012240_collect-bmc-pm.bmc.pm.log 00:26:37.622 11:37:21 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:26:37.622 11:37:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:26:37.622 11:37:21 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:26:37.622 11:37:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:26:37.622 11:37:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:26:37.622 11:37:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:26:37.622 11:37:21 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:37.622 11:37:21 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:26:37.622 11:37:21 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:26:37.622 11:37:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:26:37.622 11:37:21 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:37.622 11:37:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:37.622 11:37:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:37.622 11:37:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:37.622 
11:37:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:26:37.622 11:37:21 -- pm/common@44 -- $ pid=3345638 00:26:37.622 11:37:21 -- pm/common@50 -- $ kill -TERM 3345638 00:26:37.622 11:37:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:37.622 11:37:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:26:37.622 11:37:21 -- pm/common@44 -- $ pid=3345640 00:26:37.622 11:37:21 -- pm/common@50 -- $ kill -TERM 3345640 00:26:37.622 11:37:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:37.622 11:37:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:26:37.622 11:37:21 -- pm/common@44 -- $ pid=3345642 00:26:37.622 11:37:21 -- pm/common@50 -- $ kill -TERM 3345642 00:26:37.622 11:37:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:37.622 11:37:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:26:37.622 11:37:21 -- pm/common@44 -- $ pid=3345675 00:26:37.622 11:37:21 -- pm/common@50 -- $ sudo -E kill -TERM 3345675 00:26:37.622 + [[ -n 3209650 ]] 00:26:37.622 + sudo kill 3209650 00:26:37.632 [Pipeline] } 00:26:37.651 [Pipeline] // stage 00:26:37.656 [Pipeline] } 00:26:37.677 [Pipeline] // timeout 00:26:37.682 [Pipeline] } 00:26:37.701 [Pipeline] // catchError 00:26:37.707 [Pipeline] } 00:26:37.729 [Pipeline] // wrap 00:26:37.735 [Pipeline] } 00:26:37.752 [Pipeline] // catchError 00:26:37.761 [Pipeline] stage 00:26:37.763 [Pipeline] { (Epilogue) 00:26:37.779 [Pipeline] catchError 00:26:37.781 [Pipeline] { 00:26:37.796 [Pipeline] echo 00:26:37.798 Cleanup processes 00:26:37.804 [Pipeline] sh 00:26:38.087 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:26:38.087 3345818 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:26:38.087 3346558 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:26:38.101 [Pipeline] sh 00:26:38.384 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:26:38.384 ++ grep -v 'sudo pgrep' 00:26:38.384 ++ awk '{print $1}' 00:26:38.384 + sudo kill -9 3345818 00:26:38.393 [Pipeline] sh 00:26:38.671 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:39.645 [Pipeline] sh 00:26:39.928 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:39.928 Artifacts sizes are good 00:26:39.942 [Pipeline] archiveArtifacts 00:26:39.948 Archiving artifacts 00:26:39.999 [Pipeline] sh 00:26:40.283 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:26:40.297 [Pipeline] cleanWs 00:26:40.307 [WS-CLEANUP] Deleting project workspace... 00:26:40.307 [WS-CLEANUP] Deferred wipeout is used... 00:26:40.313 [WS-CLEANUP] done 00:26:40.315 [Pipeline] } 00:26:40.336 [Pipeline] // catchError 00:26:40.350 [Pipeline] sh 00:26:40.658 + logger -p user.info -t JENKINS-CI 00:26:40.667 [Pipeline] } 00:26:40.686 [Pipeline] // stage 00:26:40.692 [Pipeline] } 00:26:40.712 [Pipeline] // node 00:26:40.717 [Pipeline] End of Pipeline 00:26:40.755 Finished: SUCCESS